[06:48:21]The build is removed from the queue to be prepared for the start
[06:48:21]Collecting changes in 2 VCS roots (1s)
[06:48:23]Starting the build on the agent gce-agent-45188
[06:48:23]Clearing temporary directory: /home/agent/temp/buildTmp
[06:48:23]Publishing internal artifacts (2s)
[06:48:23]Using vcs information from agent file: b97f2946_cockroach.xml
[06:48:23]Checkout directory: /home/agent/work/.go/src/github.com/cockroachdb/cockroach
[06:48:23]Updating sources: agent side checkout (2s)
[06:48:25]Step 1/2: Install prerequisites (Install prerequisites) (2s)
[06:48:28]Step 2/2: Run Stress Tests (Command Line) (1h:54m:36s)
[06:48:28][Step 2/2] Starting: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/build/teamcity-stress.sh
[06:48:28][Step 2/2] in directory: /home/agent/work/.go/src/github.com/cockroachdb/cockroach
[06:48:28][Step 2/2] export BUILDER_HIDE_GOPATH_SRC=1
[06:48:28][Step 2/2] mkdir -p artifacts
[06:48:28][Step 2/2] export COCKROACH_BUILDER_CCACHE=1
[06:48:28][Step 2/2] + go install ./pkg/cmd/github-post
[06:48:29][Step 2/2] + make stress PKG=github.com/cockroachdb/cockroach/pkg/storage TESTTIMEOUT=40m GOFLAGS=-race TAGS= 'STRESSFLAGS=-maxruns 100 -maxfails 1 -stderr -p 4'
[06:48:29][Step 2/2] + tee artifacts/stress.log
[06:48:29][Step 2/2] GOPATH set to /go
[06:48:30][Step 2/2] Running make with -j8
[06:48:30][Step 2/2] GOPATH set to /go
[06:48:30][Step 2/2] Detected change in build system. Rebooting Make.
[06:48:30][Step 2/2] Running make with -j8
[06:48:30][Step 2/2] GOPATH set to /go
[06:48:30][Step 2/2] cd /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc && autoconf
[06:48:30][Step 2/2] git submodule update --init --recursive
[06:48:30][Step 2/2] go install -v ./pkg/cmd/prereqs
[06:48:30][Step 2/2] mkdir -p pkg/sql/parser/gen
[06:48:30][Step 2/2] awk -f pkg/sql/parser/help.awk < pkg/sql/parser/sql.y > pkg/sql/parser/help_messages.go.tmp || rm pkg/sql/parser/help_messages.go.tmp
[06:48:30][Step 2/2] set -euo pipefail; \
[06:48:30][Step 2/2] TYPES=$(awk '/func.*sqlSymUnion/ {print $(NF - 1)}' pkg/sql/parser/sql.y | sed -e 's/[]\/$*.^|[]/\\&/g' | tr '\n' '|' | sed -E '$s/.$//'); \
[06:48:30][Step 2/2] sed -E "s_(type|token) <($TYPES)>_\1 <union> /* <\2> */_" < pkg/sql/parser/sql.y | \
[06:48:30][Step 2/2] awk -f pkg/sql/parser/replace_help_rules.awk | \
[06:48:30][Step 2/2] sed -Ee 's,//.*$,,g;s,/[*]([^*]|[*][^/])*[*]/, ,g;s/ +$//g' > pkg/sql/parser/gen/sql-gen.y.tmp || rm pkg/sql/parser/gen/sql-gen.y.tmp
[06:48:30][Step 2/2] mv -f pkg/sql/parser/help_messages.go.tmp pkg/sql/parser/help_messages.go
[06:48:30][Step 2/2] gofmt -s -w pkg/sql/parser/help_messages.go
[06:48:30][Step 2/2] awk -f pkg/sql/parser/all_keywords.awk < pkg/sql/parser/sql.y > pkg/sql/lex/keywords.go.tmp || rm pkg/sql/lex/keywords.go.tmp
[06:48:30][Step 2/2] awk -f pkg/sql/parser/reserved_keywords.awk < pkg/sql/parser/sql.y > pkg/sql/lex/reserved_keywords.go.tmp || rm pkg/sql/lex/reserved_keywords.go.tmp
[06:48:30][Step 2/2] mv -f pkg/sql/lex/keywords.go.tmp pkg/sql/lex/keywords.go
[06:48:30][Step 2/2] gofmt -s -w pkg/sql/lex/keywords.go
[06:48:30][Step 2/2] mv -f pkg/sql/parser/gen/sql-gen.y.tmp pkg/sql/parser/gen/sql-gen.y
[06:48:30][Step 2/2] mv -f pkg/sql/lex/reserved_keywords.go.tmp pkg/sql/lex/reserved_keywords.go
[06:48:30][Step 2/2] gofmt -s -w pkg/sql/lex/reserved_keywords.go
[06:48:30][Step 2/2] mv -f pkg/sql/parser/helpmap_test.go.tmp pkg/sql/parser/helpmap_test.go
[06:48:30][Step 2/2] gofmt -s -w pkg/sql/parser/helpmap_test.go
[06:48:30][Step 2/2] github.com/cockroachdb/cockroach/pkg/cmd/prereqs
[06:48:30][Step 2/2] mkdir -p bin
[06:48:30][Step 2/2] touch bin/.submodules-initialized
[06:48:30][Step 2/2] go install -v protoc-gen-gogoroach
[06:48:30][Step 2/2] go install -v uptodate
[06:48:30][Step 2/2] bin/prereqs ./pkg/cmd/protoc-gen-gogoroach > bin/protoc-gen-gogoroach.d.tmp
[06:48:30][Step 2/2] go install -v optgen
[06:48:30][Step 2/2] bin/prereqs ./pkg/cmd/uptodate > bin/uptodate.d.tmp
[06:48:30][Step 2/2] bin/prereqs ./pkg/sql/opt/optgen/cmd/optgen > bin/optgen.d.tmp
[06:48:30][Step 2/2] mv -f bin/uptodate.d.tmp bin/uptodate.d
[06:48:30][Step 2/2] mv -f bin/optgen.d.tmp bin/optgen.d
[06:48:30][Step 2/2] mv -f bin/protoc-gen-gogoroach.d.tmp bin/protoc-gen-gogoroach.d
[06:48:30][Step 2/2] rm -rf /go/native/x86_64-pc-linux-gnu/jemalloc
[06:48:30][Step 2/2] mkdir -p /go/native/x86_64-pc-linux-gnu/jemalloc
[06:48:30][Step 2/2] cd /go/native/x86_64-pc-linux-gnu/jemalloc && /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/configure --enable-prof
[06:48:30][Step 2/2] github.com/cockroachdb/cockroach/pkg/sql/opt/optgen/lang
[06:48:30][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/MichaelTJones/walk
[06:48:30][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/gostdlib/x/tools/internal/fastwalk
[06:48:30][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/ttycolor
[06:48:30][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/gostdlib/go/printer
[06:48:30][Step 2/2] github.com/cockroachdb/cockroach/vendor/golang.org/x/tools/go/ast/astutil
[06:48:30][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/stress
[06:48:30][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/spf13/pflag
[06:48:30][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/Masterminds/semver
[06:48:30][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/armon/go-radix
[06:48:30][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/client9/misspell
[06:48:30][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/proto
[06:48:30][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/Masterminds/vcs
[06:48:30][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/boltdb/bolt
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/protobuf/proto
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/gostdlib/go/format
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/gostdlib/cmd/gofmt
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/gps/paths
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/gostdlib/x/tools/imports
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/pkg/sql/opt/optgen/cmd/optgen
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/gps/pkgtree
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/internal/fs
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/pkg/cmd/uptodate
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/jmank88/nuts
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/nightlyone/lockfile
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/sdboyer/constext
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/pelletier/go-toml
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/vendor/gopkg.in/yaml.v2
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/vendor/golang.org/x/tools/go/gcimporter15
[06:48:31][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/glog
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/protoc-gen-gogo/descriptor
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/client9/misspell/cmd/misspell
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/protobuf/protoc-gen-go/generator/internal/remap
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-gateway/utilities
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/crlfmt
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/cockroachdb/gostdlib/x/tools/cmd/goimports
[06:48:32][Step 2/2] checking for xsltproc... false
[06:48:32][Step 2/2] checking for gcc... no
[06:48:32][Step 2/2] checking for cc... cc
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway/httprule
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/golang.org/x/tools/go/gcexportdata
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/jteeuwen/go-bindata
[06:48:32][Step 2/2] optgen -out pkg/sql/opt/memo/expr.og.go exprs pkg/sql/opt/ops/*.opt
[06:48:32][Step 2/2] optgen -out pkg/sql/opt/operator.og.go ops pkg/sql/opt/ops/*.opt
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/gps/internal/pb
[06:48:32][Step 2/2] optgen -out pkg/sql/opt/xform/explorer.og.go explorer pkg/sql/opt/ops/*.opt pkg/sql/opt/xform/rules/*.opt
[06:48:32][Step 2/2] optgen -out pkg/sql/opt/norm/factory.og.go factory pkg/sql/opt/ops/*.opt pkg/sql/opt/norm/rules/*.opt
[06:48:32][Step 2/2] optgen -out pkg/sql/opt/rule_name.og.go rulenames pkg/sql/opt/ops/*.opt pkg/sql/opt/norm/rules/*.opt pkg/sql/opt/xform/rules/*.opt
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/protobuf/protoc-gen-go/descriptor
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/lint
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/gps
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/golang.org/x/tools/go/buildutil
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/jteeuwen/go-bindata/go-bindata
[06:48:32][Step 2/2] rm -rf /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos
[06:48:32][Step 2/2] rm -rf /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protosccl
[06:48:32][Step 2/2] mkdir -p /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protosccl
[06:48:32][Step 2/2] build/werror.sh /go/native/x86_64-pc-linux-gnu/protobuf/protoc -Ipkg:./vendor/github.com/gogo/protobuf:./vendor/github.com/gogo/protobuf/protobuf --cpp_out=lite:/go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protosccl ./pkg/ccl/baseccl/encryption_options.proto ./pkg/ccl/storageccl/engineccl/enginepbccl/key_registry.proto ./pkg/ccl/storageccl/engineccl/enginepbccl/stats.proto
[06:48:32][Step 2/2] mkdir -p /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/protoc-gen-gogo/plugin
[06:48:32][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/gogoproto
[06:48:32][Step 2/2] build/werror.sh /go/native/x86_64-pc-linux-gnu/protobuf/protoc -Ipkg:./vendor/github.com/gogo/protobuf:./vendor/github.com/gogo/protobuf/protobuf --cpp_out=lite:/go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos ./pkg/roachpb/data.proto ./pkg/roachpb/internal.proto ./pkg/roachpb/metadata.proto ./pkg/storage/engine/enginepb/file_registry.proto ./pkg/storage/engine/enginepb/mvcc.proto ./pkg/storage/engine/enginepb/mvcc3.proto ./pkg/storage/engine/enginepb/rocksdb.proto ./pkg/util/hlc/legacy_timestamp.proto ./pkg/util/hlc/timestamp.proto ./pkg/util/unresolved_addr.proto
[06:48:32][Step 2/2] checking whether the C compiler works...
[06:48:32][Step 2/2] sed -i -E '/gogoproto/d' /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protosccl/ccl/baseccl/encryption_options.pb.h /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protosccl/ccl/storageccl/engineccl/enginepbccl/key_registry.pb.h /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protosccl/ccl/storageccl/engineccl/enginepbccl/stats.pb.h /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protosccl/ccl/baseccl/encryption_options.pb.cc /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protosccl/ccl/storageccl/engineccl/enginepbccl/key_registry.pb.cc /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protosccl/ccl/storageccl/engineccl/enginepbccl/stats.pb.cc
[06:48:33][Step 2/2] touch bin/.cpp_ccl_protobuf_sources
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/protobuf/protoc-gen-go/plugin
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/google.golang.org/genproto/googleapis/api/annotations
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/vanity
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/protoc-gen-gogo/generator
[06:48:33][Step 2/2] sed -i -E '/gogoproto/d' /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/roachpb/data.pb.h /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/roachpb/internal.pb.h /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/roachpb/metadata.pb.h /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/storage/engine/enginepb/file_registry.pb.h /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/storage/engine/enginepb/mvcc.pb.h /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/storage/engine/enginepb/mvcc3.pb.h /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/storage/engine/enginepb/rocksdb.pb.h /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/util/hlc/legacy_timestamp.pb.h /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/util/hlc/timestamp.pb.h /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/util/unresolved_addr.pb.h /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/roachpb/data.pb.cc /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/roachpb/internal.pb.cc /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/roachpb/metadata.pb.cc /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/storage/engine/enginepb/file_registry.pb.cc /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/storage/engine/enginepb/mvcc.pb.cc /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/storage/engine/enginepb/mvcc3.pb.cc /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/storage/engine/enginepb/rocksdb.pb.cc /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/util/hlc/legacy_timestamp.pb.cc /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/util/hlc/timestamp.pb.cc /go/src/github.com/cockroachdb/cockroach/c-deps/libroach/protos/util/unresolved_addr.pb.cc
[06:48:33][Step 2/2] yes
[06:48:33][Step 2/2] checking for C compiler default output file name... a.out
[06:48:33][Step 2/2] checking for suffix of executables...
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/lint/golint
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/golang.org/x/tools/go/loader
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/protobuf/protoc-gen-go/generator
[06:48:33][Step 2/2] touch bin/.cpp_protobuf_sources
[06:48:33][Step 2/2]
[06:48:33][Step 2/2] checking whether we are cross compiling...
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/kisielk/errcheck/internal/errcheck
[06:48:33][Step 2/2] no
[06:48:33][Step 2/2] checking for suffix of object files...
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/kisielk/gotool/internal/load
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/golang.org/x/tools/cover
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/mibk/dupl/suffixtree
[06:48:33][Step 2/2] o
[06:48:33][Step 2/2] checking whether we are using the GNU C compiler... yes
[06:48:33][Step 2/2] checking whether cc accepts -g... yes
[06:48:33][Step 2/2] checking for cc option to accept ISO C89...
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/golang.org/x/perf/internal/stats
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/mibk/dupl/syntax
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/mattn/goveralls
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/kisielk/gotool
[06:48:33][Step 2/2] none needed
[06:48:33][Step 2/2] checking whether compiler is cray...
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/mibk/dupl/syntax/golang
[06:48:33][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/kisielk/errcheck
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/mibk/dupl/output
[06:48:34][Step 2/2] Scanning dependencies of target roach
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/mibk/dupl/job
[06:48:34][Step 2/2] [ 2%] Building CXX object CMakeFiles/roach.dir/batch.cc.o
[06:48:34][Step 2/2] [ 5%] Building CXX object CMakeFiles/roach.dir/encoding.cc.o
[06:48:34][Step 2/2] no
[06:48:34][Step 2/2] [ 7%] Building CXX object CMakeFiles/roach.dir/chunked_buffer.cc.o
[06:48:34][Step 2/2] checking whether compiler supports -std=gnu11...
[06:48:34][Step 2/2] [ 10%] Building CXX object CMakeFiles/roach.dir/db.cc.o
[06:48:34][Step 2/2] yes
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/wadey/gocovmerge
[06:48:34][Step 2/2] [ 12%] Building CXX object CMakeFiles/roach.dir/comparator.cc.o
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/mibk/dupl
[06:48:34][Step 2/2] [ 15%] Building CXX object CMakeFiles/roach.dir/engine.cc.o
[06:48:34][Step 2/2] checking whether compiler supports -Wall...
[06:48:34][Step 2/2] [ 17%] Building CXX object CMakeFiles/roach.dir/merge.cc.o
[06:48:34][Step 2/2] [ 20%] Building CXX object CMakeFiles/roach.dir/iterator.cc.o
[06:48:34][Step 2/2] [ 22%] Building CXX object CMakeFiles/roach.dir/file_registry.cc.o
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/plugin/defaultcheck
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/plugin/embedcheck
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/plugin/testgen
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/plugin/enumstringer
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/plugin/oneofcheck
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/plugin/populate
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/plugin/marshalto
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/plugin/unmarshal
[06:48:34][Step 2/2] [ 25%] Building CXX object CMakeFiles/roach.dir/options.cc.o
[06:48:34][Step 2/2] [ 27%] Building CXX object CMakeFiles/roach.dir/mvcc.cc.o
[06:48:34][Step 2/2] yes
[06:48:34][Step 2/2] checking whether compiler supports -Werror=declaration-after-statement...
[06:48:34][Step 2/2] [ 30%] Building CXX object CMakeFiles/roach.dir/timebound.cc.o
[06:48:34][Step 2/2] [ 32%] Building CXX object CMakeFiles/roach.dir/snapshot.cc.o
[06:48:34][Step 2/2] yes
[06:48:34][Step 2/2] [ 35%] Building CXX object CMakeFiles/roach.dir/protos/roachpb/data.pb.cc.o
[06:48:34][Step 2/2] [ 37%] Building CXX object CMakeFiles/roach.dir/protos/roachpb/internal.pb.cc.o
[06:48:34][Step 2/2] checking whether compiler supports -Wshorten-64-to-32...
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway/descriptor
[06:48:34][Step 2/2] [ 40%] Building CXX object CMakeFiles/roach.dir/protos/roachpb/metadata.pb.cc.o
[06:48:34][Step 2/2] yes
[06:48:34][Step 2/2] checking whether compiler supports -Wsign-compare...
[06:48:34][Step 2/2] [ 42%] Building CXX object CMakeFiles/roach.dir/protos/storage/engine/enginepb/mvcc.pb.cc.o
[06:48:34][Step 2/2] [ 45%] Building CXX object CMakeFiles/roach.dir/protos/storage/engine/enginepb/mvcc3.pb.cc.o
[06:48:34][Step 2/2] [ 47%] Building CXX object CMakeFiles/roach.dir/protos/storage/engine/enginepb/file_registry.pb.cc.o
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/golang.org/x/perf/storage/benchfmt
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/protoc-gen-gogo/grpc
[06:48:34][Step 2/2] yes
[06:48:34][Step 2/2] checking whether compiler supports -pipe...
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/golang.org/x/tools/cmd/goyacc
[06:48:34][Step 2/2] [ 50%] Building CXX object CMakeFiles/roach.dir/protos/storage/engine/enginepb/rocksdb.pb.cc.o
[06:48:34][Step 2/2] [ 52%] Building CXX object CMakeFiles/roach.dir/protos/util/hlc/legacy_timestamp.pb.cc.o
[06:48:34][Step 2/2] yes
[06:48:34][Step 2/2] checking whether compiler supports -g3...
[06:48:34][Step 2/2] [ 55%] Building CXX object CMakeFiles/roach.dir/rocksdbutils/env_encryption.cc.o
[06:48:34][Step 2/2] yes
[06:48:34][Step 2/2] checking how to run the C preprocessor...
[06:48:34][Step 2/2] [ 57%] Building CXX object CMakeFiles/roach.dir/protos/util/hlc/timestamp.pb.cc.o
[06:48:34][Step 2/2] [ 60%] Building CXX object CMakeFiles/roach.dir/protos/util/unresolved_addr.pb.cc.o
[06:48:34][Step 2/2] github.com/cockroachdb/cockroach/vendor/golang.org/x/perf/benchstat
[06:48:34][Step 2/2] [ 62%] Linking CXX static library libroach.a
[06:48:35][Step 2/2] github.com/cockroachdb/cockroach/vendor/golang.org/x/perf/cmd/benchstat
[06:48:35][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway/generator
[06:48:35][Step 2/2] github.com/cockroachdb/cockroach/vendor/golang.org/x/tools/cmd/stringer
[06:48:35][Step 2/2] cc -E
[06:48:35][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway/gengateway
[06:48:35][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/internal/feedback
[06:48:35][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/gps/verify
[06:48:35][Step 2/2] checking for grep that handles long lines and -e...
[06:48:35][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/plugin/compare
[06:48:35][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/plugin/description
[06:48:35][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/plugin/equal
[06:48:35][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/plugin/face
[06:48:35][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/plugin/gostring
[06:48:35][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/plugin/size
[06:48:35][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/plugin/stringer
[06:48:35][Step 2/2] /bin/grep
[06:48:35][Step 2/2] checking for egrep... /bin/grep -E
[06:48:35][Step 2/2] checking for ANSI C header files...
[06:48:35][Step 2/2] Scanning dependencies of target roachccl
[06:48:35][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/plugin/union
[06:48:35][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
[06:48:35][Step 2/2] [ 65%] Building CXX object CMakeFiles/roachccl.dir/protosccl/ccl/baseccl/encryption_options.pb.cc.o
[06:48:35][Step 2/2] [ 67%] Building CXX object CMakeFiles/roachccl.dir/ccl/crypto_utils.cc.o
[06:48:35][Step 2/2] [ 70%] Building CXX object CMakeFiles/roachccl.dir/ccl/key_manager.cc.o
[06:48:35][Step 2/2] [ 72%] Building CXX object CMakeFiles/roachccl.dir/ccl/db.cc.o
[06:48:35][Step 2/2] [ 75%] Building CXX object CMakeFiles/roachccl.dir/ccl/ctr_stream.cc.o
[06:48:35][Step 2/2] [ 77%] Building CXX object CMakeFiles/roachccl.dir/protosccl/ccl/storageccl/engineccl/enginepbccl/key_registry.pb.cc.o
[06:48:35][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep
[06:48:35][Step 2/2] [ 80%] Building CXX object CMakeFiles/roachccl.dir/protosccl/ccl/storageccl/engineccl/enginepbccl/stats.pb.cc.o
[06:48:35][Step 2/2] [ 82%] Linking CXX static library libroachccl.a
[06:48:36][Step 2/2] yes
[06:48:36][Step 2/2] checking for sys/types.h... yes
[06:48:36][Step 2/2] checking for sys/stat.h... yes
[06:48:36][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/gogo/protobuf/vanity/command
[06:48:36][Step 2/2] checking for stdlib.h... yes
[06:48:36][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/internal/importers/base
[06:48:36][Step 2/2] checking for string.h... yes
[06:48:36][Step 2/2] checking for memory.h... yes
[06:48:36][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/internal/importers/glide
[06:48:36][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/internal/importers/godep
[06:48:36][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/internal/importers/glock
[06:48:36][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/internal/importers/govend
[06:48:36][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/internal/importers/govendor
[06:48:36][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/internal/importers/gvt
[06:48:36][Step 2/2] checking for strings.h... yes
[06:48:36][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/internal/importers/vndr
[06:48:36][Step 2/2] github.com/cockroachdb/cockroach/pkg/cmd/protoc-gen-gogoroach
[06:48:36][Step 2/2] checking for inttypes.h... yes
[06:48:36][Step 2/2] checking for stdint.h... yes
[06:48:36][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/internal/importers
[06:48:36][Step 2/2] checking for unistd.h... yes
[06:48:36][Step 2/2] checking whether byte ordering is bigendian...
[06:48:36][Step 2/2] github.com/cockroachdb/cockroach/vendor/github.com/golang/dep/cmd/dep
[06:48:36][Step 2/2] no
[06:48:36][Step 2/2] checking size of void *... 8
[06:48:36][Step 2/2] checking size of int... 4
[06:48:36][Step 2/2] checking size of long... 8
[06:48:36][Step 2/2] checking size of long long... 8
[06:48:37][Step 2/2] checking size of intmax_t... 8
[06:48:37][Step 2/2] checking build system type... x86_64-pc-linux-gnu
[06:48:37][Step 2/2] checking host system type... x86_64-pc-linux-gnu
[06:48:37][Step 2/2] checking whether pause instruction is compilable... yes
[06:48:37][Step 2/2] checking for ar... ar
[06:48:37][Step 2/2] checking malloc.h usability... yes
[06:48:37][Step 2/2] checking malloc.h presence... yes
[06:48:37][Step 2/2] checking for malloc.h... yes
[06:48:37][Step 2/2] checking whether malloc_usable_size definition can use const argument... no
[06:48:37][Step 2/2] checking for library containing log... -lm
[06:48:37][Step 2/2] checking whether __attribute__ syntax is compilable... yes
[06:48:37][Step 2/2] checking whether compiler supports -fvisibility=hidden... yes
[06:48:37][Step 2/2] checking whether compiler supports -Werror... yes
[06:48:37][Step 2/2] checking whether compiler supports -herror_on_warning... no
[06:48:37][Step 2/2] checking whether tls_model attribute is compilable... yes
[06:48:37][Step 2/2] checking whether compiler supports -Werror... yes
[06:48:37][Step 2/2] checking whether compiler supports -herror_on_warning... no
[06:48:37][Step 2/2] checking whether alloc_size attribute is compilable...
[06:48:37][Step 2/2] touch bin/.bootstrap
[06:48:37][Step 2/2] find ./pkg -name node_modules -prune -o -type f -name '*.pb.go' -exec rm {} +
[06:48:37][Step 2/2] find ./pkg -name node_modules -prune -o -type f -name '*.pb.gw.go' -exec rm {} +
[06:48:37][Step 2/2] set -euo pipefail; \
[06:48:37][Step 2/2] ret=$(cd pkg/sql/parser/gen && goyacc -p sql -o sql.go.tmp sql-gen.y); \
[06:48:37][Step 2/2] if expr "$ret" : ".*conflicts" >/dev/null; then \
[06:48:37][Step 2/2] echo "$ret"; exit 1; \
[06:48:37][Step 2/2] fi
[06:48:37][Step 2/2] stringer -output=pkg/sql/opt/rule_name_string.go -type=RuleName pkg/sql/opt/rule_name.go pkg/sql/opt/rule_name.og.go
[06:48:37][Step 2/2] build/werror.sh /go/native/x86_64-pc-linux-gnu/protobuf/protoc -Ipkg:./vendor/github.com:./vendor/github.com/gogo/protobuf:./vendor/github.com/gogo/protobuf/protobuf:./vendor/go.etcd.io:./vendor/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis --grpc-gateway_out=logtostderr=true,request_context=true:./pkg ./pkg/server/serverpb/admin.proto ./pkg/server/serverpb/status.proto ./pkg/server/serverpb/authentication.proto
[06:48:37][Step 2/2] set -e; for dir in ./pkg/acceptance/cluster/ ./pkg/build/ ./pkg/ccl/backupccl/ ./pkg/ccl/baseccl/ ./pkg/ccl/storageccl/engineccl/enginepbccl/ ./pkg/ccl/utilccl/licenseccl/ ./pkg/config/ ./pkg/gossip/ ./pkg/internal/client/ ./pkg/jobs/jobspb/ ./pkg/roachpb/ ./pkg/rpc/ ./pkg/server/diagnosticspb/ ./pkg/server/serverpb/ ./pkg/server/status/statuspb/ ./pkg/settings/cluster/ ./pkg/sql/distsqlrun/ ./pkg/sql/pgwire/pgerror/ ./pkg/sql/sqlbase/ ./pkg/sql/stats/ ./pkg/storage/ ./pkg/storage/closedts/ctpb/ ./pkg/storage/engine/enginepb/ ./pkg/storage/storagepb/ ./pkg/ts/tspb/ ./pkg/util/ ./pkg/util/hlc/ ./pkg/util/log/ ./pkg/util/metric/ ./pkg/util/protoutil/ ./pkg/util/tracing/; do \
[06:48:37][Step 2/2] build/werror.sh /go/native/x86_64-pc-linux-gnu/protobuf/protoc -Ipkg:./vendor/github.com:./vendor/github.com/gogo/protobuf:./vendor/github.com/gogo/protobuf/protobuf:./vendor/go.etcd.io:./vendor/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis --gogoroach_out=Mgoogle/api/annotations.proto=github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis/google/api,Mgoogle/protobuf/timestamp.proto=github.com/gogo/protobuf/types,Mgoogle/protobuf/any.proto=github.com/gogo/protobuf/types,,plugins=grpc,import_prefix=github.com/cockroachdb/cockroach/pkg/:./pkg $dir/*.proto; \
[06:48:37][Step 2/2] done
[06:48:37][Step 2/2] no
[06:48:37][Step 2/2] checking whether compiler supports -Werror... yes
[06:48:37][Step 2/2] checking whether compiler supports -herror_on_warning... no
[06:48:37][Step 2/2] checking whether format(gnu_printf, ...) attribute is compilable... no
[06:48:37][Step 2/2] checking whether compiler supports -Werror... yes
[06:48:37][Step 2/2] checking whether compiler supports -herror_on_warning...
[06:48:37][Step 2/2] build/werror.sh /go/native/x86_64-pc-linux-gnu/protobuf/protoc -Ipkg:./vendor/github.com:./vendor/github.com/gogo/protobuf:./vendor/github.com/gogo/protobuf/protobuf:./vendor/go.etcd.io:./vendor/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis --grpc-gateway_out=logtostderr=true,request_context=true:./pkg ./pkg/ts/tspb/timeseries.proto
[06:48:37][Step 2/2] sed -i -E 's!golang.org/x/net/context!context!g' ./pkg/server/serverpb/admin.pb.gw.go ./pkg/server/serverpb/status.pb.gw.go ./pkg/server/serverpb/authentication.pb.gw.go ./pkg/ts/tspb/timeseries.pb.gw.go
[06:48:37][Step 2/2] no
[06:48:37][Step 2/2] checking whether format(printf, ...) attribute is compilable...
[06:48:37][Step 2/2] gofmt -s -w ./pkg/server/serverpb/admin.pb.gw.go ./pkg/server/serverpb/status.pb.gw.go ./pkg/server/serverpb/authentication.pb.gw.go ./pkg/ts/tspb/timeseries.pb.gw.go
[06:48:37][Step 2/2] goimports -w ./pkg/server/serverpb/admin.pb.gw.go ./pkg/server/serverpb/status.pb.gw.go ./pkg/server/serverpb/authentication.pb.gw.go ./pkg/ts/tspb/timeseries.pb.gw.go
[06:48:38][Step 2/2] yes
[06:48:38][Step 2/2] checking for a BSD-compatible install... /usr/bin/install -c
[06:48:38][Step 2/2] checking for ranlib... ranlib
[06:48:38][Step 2/2] checking for ld... /usr/bin/ld
[06:48:38][Step 2/2] checking for autoconf... /usr/bin/autoconf
[06:48:38][Step 2/2] checking for memalign... yes
[06:48:38][Step 2/2] checking for valloc...
[06:48:38][Step 2/2] touch bin/.gw_protobuf_sources
[06:48:38][Step 2/2] yes
[06:48:38][Step 2/2] checking whether compiler supports -O3... yes
[06:48:38][Step 2/2] checking whether compiler supports -funroll-loops... yes
[06:48:38][Step 2/2] checking unwind.h usability... yes
[06:48:38][Step 2/2] checking unwind.h presence... yes
[06:48:38][Step 2/2] checking for unwind.h... yes
[06:48:38][Step 2/2] checking for _Unwind_Backtrace in -lgcc... yes
[06:48:38][Step 2/2] checking configured backtracing method... libgcc
[06:48:38][Step 2/2] checking for sbrk... yes
[06:48:38][Step 2/2] checking whether utrace(2) is compilable... no
[06:48:38][Step 2/2] checking whether valgrind is compilable... no
[06:48:38][Step 2/2] checking whether a program using __builtin_unreachable is compilable... yes
[06:48:38][Step 2/2] checking whether a program using __builtin_ffsl is compilable... yes
[06:48:38][Step 2/2] checking LG_PAGE... 12
[06:48:38][Step 2/2] Missing VERSION file, and unable to generate it; creating bogus VERSION
[06:48:38][Step 2/2] checking pthread.h usability... yes
[06:48:38][Step 2/2] checking pthread.h presence... yes
[06:48:38][Step 2/2] checking for pthread.h... yes
[06:48:38][Step 2/2] checking for pthread_create in -lpthread... yes
[06:48:38][Step 2/2] checking whether pthread_atfork(3) is compilable... yes
[06:48:38][Step 2/2] checking for library containing clock_gettime... none required
[06:48:38][Step 2/2] checking whether clock_gettime(CLOCK_MONOTONIC_COARSE, ...) is compilable... yes
[06:48:39][Step 2/2] checking whether clock_gettime(CLOCK_MONOTONIC, ...) is compilable... yes
[06:48:39][Step 2/2] checking whether mach_absolute_time() is compilable... no
[06:48:39][Step 2/2] checking whether compiler supports -Werror... yes
[06:48:39][Step 2/2] checking whether syscall(2) is compilable... yes
[06:48:39][Step 2/2] checking for secure_getenv... yes
[06:48:39][Step 2/2] checking for issetugid... no
[06:48:39][Step 2/2] checking for _malloc_thread_cleanup... no
[06:48:39][Step 2/2] checking for _pthread_mutex_init_calloc_cb... no
[06:48:39][Step 2/2] checking for TLS... yes
[06:48:39][Step 2/2] checking whether C11 atomics is compilable... yes
[06:48:39][Step 2/2] checking whether atomic(9) is compilable... no
[06:48:39][Step 2/2] checking whether Darwin OSAtomic*() is compilable... no
[06:48:39][Step 2/2] checking whether madvise(2) is compilable... yes
[06:48:39][Step 2/2] checking whether madvise(..., MADV_FREE) is compilable... no
[06:48:39][Step 2/2] checking whether madvise(..., MADV_DONTNEED) is compilable... yes
[06:48:39][Step 2/2] checking whether madvise(..., MADV_[NO]HUGEPAGE) is compilable... yes
[06:48:39][Step 2/2] checking whether to force 32-bit __sync_{add,sub}_and_fetch()... no
[06:48:39][Step 2/2] checking whether to force 64-bit __sync_{add,sub}_and_fetch()... no
[06:48:40][Step 2/2] checking for __builtin_clz... yes
[06:48:40][Step 2/2] checking whether Darwin os_unfair_lock_*() is compilable... no
[06:48:40][Step 2/2] checking whether Darwin OSSpin*() is compilable... no
[06:48:40][Step 2/2] checking whether glibc malloc hook is compilable... yes
[06:48:40][Step 2/2] checking whether glibc memalign hook is compilable... yes
[06:48:40][Step 2/2] checking whether pthreads adaptive mutexes is compilable... yes
[06:48:40][Step 2/2] checking for stdbool.h that conforms to C99... yes
[06:48:40][Step 2/2] checking for _Bool... yes
[06:48:40][Step 2/2] configure: creating ./config.status
[06:48:40][Step 2/2] config.status: creating Makefile
[06:48:40][Step 2/2] config.status: creating jemalloc.pc
[06:48:40][Step 2/2] config.status: creating doc/html.xsl
[06:48:40][Step 2/2] config.status: creating doc/manpages.xsl
[06:48:40][Step 2/2] config.status: creating doc/jemalloc.xml
[06:48:40][Step 2/2] config.status: creating include/jemalloc/jemalloc_macros.h
[06:48:40][Step 2/2] config.status: creating include/jemalloc/jemalloc_protos.h
[06:48:40][Step 2/2] config.status: creating include/jemalloc/jemalloc_typedefs.h
[06:48:40][Step 2/2] config.status: creating include/jemalloc/internal/jemalloc_internal.h
[06:48:40][Step 2/2] config.status: creating test/test.sh
[06:48:40][Step 2/2] config.status: creating test/include/test/jemalloc_test.h
[06:48:40][Step 2/2] config.status: creating config.stamp
[06:48:40][Step 2/2] config.status: creating bin/jemalloc-config
[06:48:40][Step 2/2] config.status: creating bin/jemalloc.sh
[06:48:40][Step 2/2] config.status: creating bin/jeprof
[06:48:40][Step 2/2] config.status: creating include/jemalloc/jemalloc_defs.h
[06:48:40][Step 2/2] config.status: creating include/jemalloc/internal/jemalloc_internal_defs.h
[06:48:40][Step 2/2] config.status: creating test/include/test/jemalloc_test_defs.h
[06:48:40][Step 2/2] config.status: executing include/jemalloc/internal/private_namespace.h commands
[06:48:40][Step 2/2] config.status: executing include/jemalloc/internal/private_unnamespace.h commands
[06:48:40][Step 2/2] config.status: executing include/jemalloc/internal/public_symbols.txt commands
[06:48:40][Step 2/2] config.status: executing include/jemalloc/internal/public_namespace.h commands
[06:48:40][Step 2/2] config.status: executing include/jemalloc/internal/public_unnamespace.h commands
[06:48:40][Step 2/2] config.status: executing include/jemalloc/internal/size_classes.h commands
[06:48:41][Step 2/2] (echo "// Code generated by goyacc. DO NOT EDIT."; \
[06:48:41][Step 2/2] echo "// GENERATED FILE DO NOT EDIT"; \
[06:48:41][Step 2/2] cat pkg/sql/parser/gen/sql.go.tmp | \
[06:48:41][Step 2/2] sed -E 's/^const ([A-Z][_A-Z0-9]*) =.*$/const \1 = lex.\1/g') > pkg/sql/parser/sql.go.tmp || rm pkg/sql/parser/sql.go.tmp
[06:48:41][Step 2/2] (echo "// Code generated by make. DO NOT EDIT."; \
[06:48:41][Step 2/2] echo "// GENERATED FILE DO NOT EDIT"; \
[06:48:41][Step 2/2] echo; \
[06:48:41][Step 2/2] echo "package lex"; \
[06:48:41][Step 2/2] echo; \
[06:48:41][Step 2/2] grep '^const [A-Z][_A-Z0-9]* ' pkg/sql/parser/gen/sql.go.tmp) > pkg/sql/lex/tokens.go.tmp || rm pkg/sql/lex/tokens.go.tmp
[06:48:41][Step 2/2] mv -f pkg/sql/lex/tokens.go.tmp pkg/sql/lex/tokens.go
[06:48:41][Step 2/2] mv -f pkg/sql/parser/sql.go.tmp pkg/sql/parser/sql.go
[06:48:41][Step 2/2] goimports -w pkg/sql/parser/sql.go
[06:48:41][Step 2/2] config.status: executing include/jemalloc/jemalloc_protos_jet.h commands
[06:48:41][Step 2/2] config.status: executing include/jemalloc/jemalloc_rename.h commands
[06:48:41][Step 2/2] config.status: executing include/jemalloc/jemalloc_mangle.h commands
[06:48:41][Step 2/2] config.status: executing include/jemalloc/jemalloc_mangle_jet.h commands
[06:48:41][Step 2/2] config.status: executing include/jemalloc/jemalloc.h commands
[06:48:41][Step 2/2] ===============================================================================
[06:48:41][Step 2/2] jemalloc version : 0.0.0-0-g0000000000000000000000000000000000000000
[06:48:41][Step 2/2] library revision : 2
[06:48:41][Step 2/2]
[06:48:41][Step 2/2] CONFIG : --enable-prof CFLAGS=-g1 LDFLAGS=
[06:48:41][Step 2/2] CC : cc
[06:48:41][Step 2/2] CONFIGURE_CFLAGS : -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops
[06:48:41][Step 2/2] SPECIFIED_CFLAGS : -g1
[06:48:41][Step 2/2] EXTRA_CFLAGS :
[06:48:41][Step 2/2] CPPFLAGS : -D_GNU_SOURCE -D_REENTRANT
[06:48:41][Step 2/2] LDFLAGS :
[06:48:41][Step 2/2] EXTRA_LDFLAGS :
[06:48:41][Step 2/2] LIBS : -lm -lgcc -lm -lpthread
[06:48:41][Step 2/2] RPATH_EXTRA :
[06:48:41][Step 2/2]
[06:48:41][Step 2/2] XSLTPROC : false
[06:48:41][Step 2/2] XSLROOT :
[06:48:41][Step 2/2]
[06:48:41][Step 2/2] PREFIX : /usr/local
[06:48:41][Step 2/2] BINDIR : /usr/local/bin
[06:48:41][Step 2/2] DATADIR : /usr/local/share
[06:48:41][Step 2/2] INCLUDEDIR : /usr/local/include
[06:48:41][Step 2/2] LIBDIR : /usr/local/lib
[06:48:41][Step 2/2] MANDIR : /usr/local/share/man
[06:48:41][Step 2/2]
[06:48:41][Step 2/2] srcroot : /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/
[06:48:41][Step 2/2] abs_srcroot : /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/
[06:48:41][Step 2/2] objroot :
[06:48:41][Step 2/2] abs_objroot : /go/native/x86_64-pc-linux-gnu/jemalloc/
[06:48:41][Step 2/2]
[06:48:41][Step 2/2] JEMALLOC_PREFIX :
[06:48:41][Step 2/2] JEMALLOC_PRIVATE_NAMESPACE
[06:48:41][Step 2/2] : je_
[06:48:41][Step 2/2] install_suffix :
[06:48:41][Step 2/2] malloc_conf :
[06:48:41][Step 2/2] autogen : 0
[06:48:41][Step 2/2] cc-silence : 1
[06:48:41][Step 2/2] debug : 0
[06:48:41][Step 2/2] code-coverage : 0
[06:48:41][Step 2/2] stats : 1
[06:48:41][Step 2/2] prof : 1
[06:48:41][Step 2/2] prof-libunwind : 0
[06:48:41][Step 2/2] prof-libgcc : 1
[06:48:41][Step 2/2] prof-gcc : 0
[06:48:41][Step 2/2] tcache : 1
[06:48:41][Step 2/2] thp : 1
[06:48:41][Step 2/2] fill : 1
[06:48:41][Step 2/2] utrace : 0
[06:48:41][Step 2/2] valgrind : 0
[06:48:41][Step 2/2] xmalloc : 0
[06:48:41][Step 2/2] munmap : 0
[06:48:41][Step 2/2] lazy_lock : 0
[06:48:41][Step 2/2] tls : 1
[06:48:41][Step 2/2] cache-oblivious : 1
[06:48:41][Step 2/2] ===============================================================================
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/jemalloc.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/jemalloc.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/arena.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/arena.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/atomic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/atomic.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/base.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/base.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/bitmap.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/bitmap.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/chunk.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/chunk.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/chunk_dss.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/chunk_dss.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/chunk_mmap.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/chunk_mmap.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/ckh.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/ckh.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/ctl.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/ctl.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/extent.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/extent.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/hash.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/hash.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/huge.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/huge.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/mb.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/mb.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/mutex.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/mutex.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/nstime.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/nstime.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/pages.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/pages.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/prng.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/prng.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/prof.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/prof.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/quarantine.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/quarantine.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/rtree.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/rtree.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/spin.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/spin.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/stats.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/stats.c
[06:48:41][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/tcache.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/tcache.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/ticker.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/ticker.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/tsd.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/tsd.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/util.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/util.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/jemalloc.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/jemalloc.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/witness.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/witness.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/arena.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/arena.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/atomic.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/atomic.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/bitmap.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/bitmap.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/base.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/base.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/chunk.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/chunk.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/chunk_dss.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/chunk_dss.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/chunk_mmap.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/chunk_mmap.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/ckh.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/ckh.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/ctl.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/ctl.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/extent.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/extent.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/hash.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/hash.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/huge.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/huge.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/mb.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/mb.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/mutex.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/mutex.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/nstime.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/nstime.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/pages.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/pages.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/prof.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/prof.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/prng.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/prng.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/quarantine.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/quarantine.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/rtree.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/rtree.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/stats.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/stats.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/spin.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/spin.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/tcache.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/tcache.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/ticker.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/ticker.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/tsd.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/tsd.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/util.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/util.c
[06:48:42][Step 2/2] cc -std=gnu11 -Wall -Werror=declaration-after-statement -Wshorten-64-to-32 -Wsign-compare -pipe -g3 -fvisibility=hidden -O3 -funroll-loops -g1 -fPIC -DPIC -c -D_GNU_SOURCE -D_REENTRANT -I/go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/include -Iinclude -o src/witness.pic.o /go/src/github.com/cockroachdb/cockroach/c-deps/jemalloc/src/witness.c
[06:48:42][Step 2/2] ar crus lib/libjemalloc.a src/jemalloc.o src/arena.o src/atomic.o src/base.o src/bitmap.o src/chunk.o src/chunk_dss.o src/chunk_mmap.o src/ckh.o src/ctl.o src/extent.o src/hash.o src/huge.o src/mb.o src/mutex.o src/nstime.o src/pages.o src/prng.o src/prof.o src/quarantine.o src/rtree.o src/stats.o src/spin.o src/tcache.o src/ticker.o src/tsd.o src/util.o src/witness.o
[06:48:42][Step 2/2] ar: `u' modifier ignored since `D' is the default (see `U')
[06:48:42][Step 2/2] ar crus lib/libjemalloc_pic.a src/jemalloc.pic.o src/arena.pic.o src/atomic.pic.o src/base.pic.o src/bitmap.pic.o src/chunk.pic.o src/chunk_dss.pic.o src/chunk_mmap.pic.o src/ckh.pic.o src/ctl.pic.o src/extent.pic.o src/hash.pic.o src/huge.pic.o src/mb.pic.o src/mutex.pic.o src/nstime.pic.o src/pages.pic.o src/prng.pic.o src/prof.pic.o src/quarantine.pic.o src/rtree.pic.o src/stats.pic.o src/spin.pic.o src/tcache.pic.o src/ticker.pic.o src/tsd.pic.o src/util.pic.o src/witness.pic.o
[06:48:42][Step 2/2] ar: `u' modifier ignored since `D' is the default (see `U')
[06:48:43][Step 2/2] sed -i '/import _/d' ./pkg/acceptance/cluster/testconfig.pb.go ./pkg/build/info.pb.go ./pkg/ccl/backupccl/backup.pb.go ./pkg/ccl/baseccl/encryption_options.pb.go ./pkg/ccl/storageccl/engineccl/enginepbccl/key_registry.pb.go ./pkg/ccl/storageccl/engineccl/enginepbccl/stats.pb.go ./pkg/ccl/utilccl/licenseccl/license.pb.go ./pkg/config/system.pb.go ./pkg/config/zone.pb.go ./pkg/gossip/gossip.pb.go ./pkg/internal/client/lease.pb.go ./pkg/jobs/jobspb/jobs.pb.go ./pkg/roachpb/api.pb.go ./pkg/roachpb/app_stats.pb.go ./pkg/roachpb/data.pb.go ./pkg/roachpb/errors.pb.go ./pkg/roachpb/internal.pb.go ./pkg/roachpb/internal_raft.pb.go ./pkg/roachpb/io-formats.pb.go ./pkg/roachpb/metadata.pb.go ./pkg/rpc/heartbeat.pb.go ./pkg/server/diagnosticspb/diagnostics.pb.go ./pkg/server/serverpb/admin.pb.go ./pkg/server/serverpb/authentication.pb.go ./pkg/server/serverpb/init.pb.go ./pkg/server/serverpb/status.pb.go ./pkg/server/status/statuspb/status.pb.go ./pkg/settings/cluster/cluster_version.pb.go ./pkg/sql/distsqlrun/api.pb.go ./pkg/sql/distsqlrun/data.pb.go ./pkg/sql/distsqlrun/processors.pb.go ./pkg/sql/distsqlrun/stats.pb.go ./pkg/sql/pgwire/pgerror/errors.pb.go ./pkg/sql/sqlbase/encoded_datum.pb.go ./pkg/sql/sqlbase/join_type.pb.go ./pkg/sql/sqlbase/privilege.pb.go ./pkg/sql/sqlbase/structured.pb.go ./pkg/sql/stats/histogram.pb.go ./pkg/storage/api.pb.go ./pkg/storage/closedts/ctpb/entry.pb.go ./pkg/storage/engine/enginepb/file_registry.pb.go ./pkg/storage/engine/enginepb/mvcc.pb.go ./pkg/storage/engine/enginepb/mvcc3.pb.go ./pkg/storage/engine/enginepb/rocksdb.pb.go ./pkg/storage/raft.pb.go ./pkg/storage/storagepb/lease_status.pb.go ./pkg/storage/storagepb/liveness.pb.go ./pkg/storage/storagepb/log.pb.go ./pkg/storage/storagepb/proposer_kv.pb.go ./pkg/storage/storagepb/state.pb.go ./pkg/ts/tspb/timeseries.pb.go ./pkg/util/hlc/legacy_timestamp.pb.go ./pkg/util/hlc/timestamp.pb.go ./pkg/util/log/log.pb.go ./pkg/util/metric/metric.pb.go ./pkg/util/protoutil/clone.pb.go ./pkg/util/tracing/recorded_span.pb.go ./pkg/util/unresolved_addr.pb.go
[06:48:44][Step 2/2] sed -i -E 's!import (fmt|math) "github.com/cockroachdb/cockroach/pkg/(fmt|math)"! !g' ./pkg/acceptance/cluster/testconfig.pb.go ./pkg/build/info.pb.go ./pkg/ccl/backupccl/backup.pb.go ./pkg/ccl/baseccl/encryption_options.pb.go ./pkg/ccl/storageccl/engineccl/enginepbccl/key_registry.pb.go ./pkg/ccl/storageccl/engineccl/enginepbccl/stats.pb.go ./pkg/ccl/utilccl/licenseccl/license.pb.go ./pkg/config/system.pb.go ./pkg/config/zone.pb.go ./pkg/gossip/gossip.pb.go ./pkg/internal/client/lease.pb.go ./pkg/jobs/jobspb/jobs.pb.go ./pkg/roachpb/api.pb.go ./pkg/roachpb/app_stats.pb.go ./pkg/roachpb/data.pb.go ./pkg/roachpb/errors.pb.go ./pkg/roachpb/internal.pb.go ./pkg/roachpb/internal_raft.pb.go ./pkg/roachpb/io-formats.pb.go ./pkg/roachpb/metadata.pb.go ./pkg/rpc/heartbeat.pb.go ./pkg/server/diagnosticspb/diagnostics.pb.go ./pkg/server/serverpb/admin.pb.go ./pkg/server/serverpb/authentication.pb.go ./pkg/server/serverpb/init.pb.go ./pkg/server/serverpb/status.pb.go ./pkg/server/status/statuspb/status.pb.go ./pkg/settings/cluster/cluster_version.pb.go ./pkg/sql/distsqlrun/api.pb.go ./pkg/sql/distsqlrun/data.pb.go ./pkg/sql/distsqlrun/processors.pb.go ./pkg/sql/distsqlrun/stats.pb.go ./pkg/sql/pgwire/pgerror/errors.pb.go ./pkg/sql/sqlbase/encoded_datum.pb.go ./pkg/sql/sqlbase/join_type.pb.go ./pkg/sql/sqlbase/privilege.pb.go ./pkg/sql/sqlbase/structured.pb.go ./pkg/sql/stats/histogram.pb.go ./pkg/storage/api.pb.go ./pkg/storage/closedts/ctpb/entry.pb.go ./pkg/storage/engine/enginepb/file_registry.pb.go ./pkg/storage/engine/enginepb/mvcc.pb.go ./pkg/storage/engine/enginepb/mvcc3.pb.go ./pkg/storage/engine/enginepb/rocksdb.pb.go ./pkg/storage/raft.pb.go ./pkg/storage/storagepb/lease_status.pb.go ./pkg/storage/storagepb/liveness.pb.go ./pkg/storage/storagepb/log.pb.go ./pkg/storage/storagepb/proposer_kv.pb.go ./pkg/storage/storagepb/state.pb.go ./pkg/ts/tspb/timeseries.pb.go ./pkg/util/hlc/legacy_timestamp.pb.go ./pkg/util/hlc/timestamp.pb.go ./pkg/util/log/log.pb.go ./pkg/util/metric/metric.pb.go ./pkg/util/protoutil/clone.pb.go ./pkg/util/tracing/recorded_span.pb.go ./pkg/util/unresolved_addr.pb.go
[06:48:44][Step 2/2] sed -i -E 's!github\.com/cockroachdb/cockroach/pkg/(etcd)!go.etcd.io/\1!g' ./pkg/acceptance/cluster/testconfig.pb.go ./pkg/build/info.pb.go ./pkg/ccl/backupccl/backup.pb.go ./pkg/ccl/baseccl/encryption_options.pb.go ./pkg/ccl/storageccl/engineccl/enginepbccl/key_registry.pb.go ./pkg/ccl/storageccl/engineccl/enginepbccl/stats.pb.go ./pkg/ccl/utilccl/licenseccl/license.pb.go ./pkg/config/system.pb.go ./pkg/config/zone.pb.go ./pkg/gossip/gossip.pb.go ./pkg/internal/client/lease.pb.go ./pkg/jobs/jobspb/jobs.pb.go ./pkg/roachpb/api.pb.go ./pkg/roachpb/app_stats.pb.go ./pkg/roachpb/data.pb.go ./pkg/roachpb/errors.pb.go ./pkg/roachpb/internal.pb.go ./pkg/roachpb/internal_raft.pb.go ./pkg/roachpb/io-formats.pb.go ./pkg/roachpb/metadata.pb.go ./pkg/rpc/heartbeat.pb.go ./pkg/server/diagnosticspb/diagnostics.pb.go ./pkg/server/serverpb/admin.pb.go ./pkg/server/serverpb/authentication.pb.go ./pkg/server/serverpb/init.pb.go ./pkg/server/serverpb/status.pb.go ./pkg/server/status/statuspb/status.pb.go ./pkg/settings/cluster/cluster_version.pb.go ./pkg/sql/distsqlrun/api.pb.go ./pkg/sql/distsqlrun/data.pb.go ./pkg/sql/distsqlrun/processors.pb.go ./pkg/sql/distsqlrun/stats.pb.go ./pkg/sql/pgwire/pgerror/errors.pb.go ./pkg/sql/sqlbase/encoded_datum.pb.go ./pkg/sql/sqlbase/join_type.pb.go ./pkg/sql/sqlbase/privilege.pb.go ./pkg/sql/sqlbase/structured.pb.go ./pkg/sql/stats/histogram.pb.go ./pkg/storage/api.pb.go ./pkg/storage/closedts/ctpb/entry.pb.go ./pkg/storage/engine/enginepb/file_registry.pb.go ./pkg/storage/engine/enginepb/mvcc.pb.go ./pkg/storage/engine/enginepb/mvcc3.pb.go ./pkg/storage/engine/enginepb/rocksdb.pb.go ./pkg/storage/raft.pb.go ./pkg/storage/storagepb/lease_status.pb.go ./pkg/storage/storagepb/liveness.pb.go ./pkg/storage/storagepb/log.pb.go ./pkg/storage/storagepb/proposer_kv.pb.go ./pkg/storage/storagepb/state.pb.go ./pkg/ts/tspb/timeseries.pb.go ./pkg/util/hlc/legacy_timestamp.pb.go ./pkg/util/hlc/timestamp.pb.go ./pkg/util/log/log.pb.go 
./pkg/util/metric/metric.pb.go ./pkg/util/protoutil/clone.pb.go ./pkg/util/tracing/recorded_span.pb.go ./pkg/util/unresolved_addr.pb.go
[06:48:44][Step 2/2] sed -i -E 's!cockroachdb/cockroach/pkg/(prometheus/client_model)!\1/go!g' ./pkg/acceptance/cluster/testconfig.pb.go ./pkg/build/info.pb.go ./pkg/ccl/backupccl/backup.pb.go ./pkg/ccl/baseccl/encryption_options.pb.go ./pkg/ccl/storageccl/engineccl/enginepbccl/key_registry.pb.go ./pkg/ccl/storageccl/engineccl/enginepbccl/stats.pb.go ./pkg/ccl/utilccl/licenseccl/license.pb.go ./pkg/config/system.pb.go ./pkg/config/zone.pb.go ./pkg/gossip/gossip.pb.go ./pkg/internal/client/lease.pb.go ./pkg/jobs/jobspb/jobs.pb.go ./pkg/roachpb/api.pb.go ./pkg/roachpb/app_stats.pb.go ./pkg/roachpb/data.pb.go ./pkg/roachpb/errors.pb.go ./pkg/roachpb/internal.pb.go ./pkg/roachpb/internal_raft.pb.go ./pkg/roachpb/io-formats.pb.go ./pkg/roachpb/metadata.pb.go ./pkg/rpc/heartbeat.pb.go ./pkg/server/diagnosticspb/diagnostics.pb.go ./pkg/server/serverpb/admin.pb.go ./pkg/server/serverpb/authentication.pb.go ./pkg/server/serverpb/init.pb.go ./pkg/server/serverpb/status.pb.go ./pkg/server/status/statuspb/status.pb.go ./pkg/settings/cluster/cluster_version.pb.go ./pkg/sql/distsqlrun/api.pb.go ./pkg/sql/distsqlrun/data.pb.go ./pkg/sql/distsqlrun/processors.pb.go ./pkg/sql/distsqlrun/stats.pb.go ./pkg/sql/pgwire/pgerror/errors.pb.go ./pkg/sql/sqlbase/encoded_datum.pb.go ./pkg/sql/sqlbase/join_type.pb.go ./pkg/sql/sqlbase/privilege.pb.go ./pkg/sql/sqlbase/structured.pb.go ./pkg/sql/stats/histogram.pb.go ./pkg/storage/api.pb.go ./pkg/storage/closedts/ctpb/entry.pb.go ./pkg/storage/engine/enginepb/file_registry.pb.go ./pkg/storage/engine/enginepb/mvcc.pb.go ./pkg/storage/engine/enginepb/mvcc3.pb.go ./pkg/storage/engine/enginepb/rocksdb.pb.go ./pkg/storage/raft.pb.go ./pkg/storage/storagepb/lease_status.pb.go ./pkg/storage/storagepb/liveness.pb.go ./pkg/storage/storagepb/log.pb.go ./pkg/storage/storagepb/proposer_kv.pb.go ./pkg/storage/storagepb/state.pb.go ./pkg/ts/tspb/timeseries.pb.go ./pkg/util/hlc/legacy_timestamp.pb.go ./pkg/util/hlc/timestamp.pb.go ./pkg/util/log/log.pb.go 
./pkg/util/metric/metric.pb.go ./pkg/util/protoutil/clone.pb.go ./pkg/util/tracing/recorded_span.pb.go ./pkg/util/unresolved_addr.pb.go
[06:48:45][Step 2/2] sed -i -E 's!github.com/cockroachdb/cockroach/pkg/(bytes|encoding/binary|errors|fmt|io|math|github\.com|(google\.)?golang\.org)!\1!g' ./pkg/acceptance/cluster/testconfig.pb.go ./pkg/build/info.pb.go ./pkg/ccl/backupccl/backup.pb.go ./pkg/ccl/baseccl/encryption_options.pb.go ./pkg/ccl/storageccl/engineccl/enginepbccl/key_registry.pb.go ./pkg/ccl/storageccl/engineccl/enginepbccl/stats.pb.go ./pkg/ccl/utilccl/licenseccl/license.pb.go ./pkg/config/system.pb.go ./pkg/config/zone.pb.go ./pkg/gossip/gossip.pb.go ./pkg/internal/client/lease.pb.go ./pkg/jobs/jobspb/jobs.pb.go ./pkg/roachpb/api.pb.go ./pkg/roachpb/app_stats.pb.go ./pkg/roachpb/data.pb.go ./pkg/roachpb/errors.pb.go ./pkg/roachpb/internal.pb.go ./pkg/roachpb/internal_raft.pb.go ./pkg/roachpb/io-formats.pb.go ./pkg/roachpb/metadata.pb.go ./pkg/rpc/heartbeat.pb.go ./pkg/server/diagnosticspb/diagnostics.pb.go ./pkg/server/serverpb/admin.pb.go ./pkg/server/serverpb/authentication.pb.go ./pkg/server/serverpb/init.pb.go ./pkg/server/serverpb/status.pb.go ./pkg/server/status/statuspb/status.pb.go ./pkg/settings/cluster/cluster_version.pb.go ./pkg/sql/distsqlrun/api.pb.go ./pkg/sql/distsqlrun/data.pb.go ./pkg/sql/distsqlrun/processors.pb.go ./pkg/sql/distsqlrun/stats.pb.go ./pkg/sql/pgwire/pgerror/errors.pb.go ./pkg/sql/sqlbase/encoded_datum.pb.go ./pkg/sql/sqlbase/join_type.pb.go ./pkg/sql/sqlbase/privilege.pb.go ./pkg/sql/sqlbase/structured.pb.go ./pkg/sql/stats/histogram.pb.go ./pkg/storage/api.pb.go ./pkg/storage/closedts/ctpb/entry.pb.go ./pkg/storage/engine/enginepb/file_registry.pb.go ./pkg/storage/engine/enginepb/mvcc.pb.go ./pkg/storage/engine/enginepb/mvcc3.pb.go ./pkg/storage/engine/enginepb/rocksdb.pb.go ./pkg/storage/raft.pb.go ./pkg/storage/storagepb/lease_status.pb.go ./pkg/storage/storagepb/liveness.pb.go ./pkg/storage/storagepb/log.pb.go ./pkg/storage/storagepb/proposer_kv.pb.go ./pkg/storage/storagepb/state.pb.go ./pkg/ts/tspb/timeseries.pb.go 
./pkg/util/hlc/legacy_timestamp.pb.go ./pkg/util/hlc/timestamp.pb.go ./pkg/util/log/log.pb.go ./pkg/util/metric/metric.pb.go ./pkg/util/protoutil/clone.pb.go ./pkg/util/tracing/recorded_span.pb.go ./pkg/util/unresolved_addr.pb.go
[06:48:45][Step 2/2] sed -i -E 's!golang.org/x/net/context!context!g' ./pkg/acceptance/cluster/testconfig.pb.go ./pkg/build/info.pb.go ./pkg/ccl/backupccl/backup.pb.go ./pkg/ccl/baseccl/encryption_options.pb.go ./pkg/ccl/storageccl/engineccl/enginepbccl/key_registry.pb.go ./pkg/ccl/storageccl/engineccl/enginepbccl/stats.pb.go ./pkg/ccl/utilccl/licenseccl/license.pb.go ./pkg/config/system.pb.go ./pkg/config/zone.pb.go ./pkg/gossip/gossip.pb.go ./pkg/internal/client/lease.pb.go ./pkg/jobs/jobspb/jobs.pb.go ./pkg/roachpb/api.pb.go ./pkg/roachpb/app_stats.pb.go ./pkg/roachpb/data.pb.go ./pkg/roachpb/errors.pb.go ./pkg/roachpb/internal.pb.go ./pkg/roachpb/internal_raft.pb.go ./pkg/roachpb/io-formats.pb.go ./pkg/roachpb/metadata.pb.go ./pkg/rpc/heartbeat.pb.go ./pkg/server/diagnosticspb/diagnostics.pb.go ./pkg/server/serverpb/admin.pb.go ./pkg/server/serverpb/authentication.pb.go ./pkg/server/serverpb/init.pb.go ./pkg/server/serverpb/status.pb.go ./pkg/server/status/statuspb/status.pb.go ./pkg/settings/cluster/cluster_version.pb.go ./pkg/sql/distsqlrun/api.pb.go ./pkg/sql/distsqlrun/data.pb.go ./pkg/sql/distsqlrun/processors.pb.go ./pkg/sql/distsqlrun/stats.pb.go ./pkg/sql/pgwire/pgerror/errors.pb.go ./pkg/sql/sqlbase/encoded_datum.pb.go ./pkg/sql/sqlbase/join_type.pb.go ./pkg/sql/sqlbase/privilege.pb.go ./pkg/sql/sqlbase/structured.pb.go ./pkg/sql/stats/histogram.pb.go ./pkg/storage/api.pb.go ./pkg/storage/closedts/ctpb/entry.pb.go ./pkg/storage/engine/enginepb/file_registry.pb.go ./pkg/storage/engine/enginepb/mvcc.pb.go ./pkg/storage/engine/enginepb/mvcc3.pb.go ./pkg/storage/engine/enginepb/rocksdb.pb.go ./pkg/storage/raft.pb.go ./pkg/storage/storagepb/lease_status.pb.go ./pkg/storage/storagepb/liveness.pb.go ./pkg/storage/storagepb/log.pb.go ./pkg/storage/storagepb/proposer_kv.pb.go ./pkg/storage/storagepb/state.pb.go ./pkg/ts/tspb/timeseries.pb.go ./pkg/util/hlc/legacy_timestamp.pb.go ./pkg/util/hlc/timestamp.pb.go ./pkg/util/log/log.pb.go 
./pkg/util/metric/metric.pb.go ./pkg/util/protoutil/clone.pb.go ./pkg/util/tracing/recorded_span.pb.go ./pkg/util/unresolved_addr.pb.go
[06:48:45][Step 2/2] gofmt -s -w ./pkg/acceptance/cluster/testconfig.pb.go ./pkg/build/info.pb.go ./pkg/ccl/backupccl/backup.pb.go ./pkg/ccl/baseccl/encryption_options.pb.go ./pkg/ccl/storageccl/engineccl/enginepbccl/key_registry.pb.go ./pkg/ccl/storageccl/engineccl/enginepbccl/stats.pb.go ./pkg/ccl/utilccl/licenseccl/license.pb.go ./pkg/config/system.pb.go ./pkg/config/zone.pb.go ./pkg/gossip/gossip.pb.go ./pkg/internal/client/lease.pb.go ./pkg/jobs/jobspb/jobs.pb.go ./pkg/roachpb/api.pb.go ./pkg/roachpb/app_stats.pb.go ./pkg/roachpb/data.pb.go ./pkg/roachpb/errors.pb.go ./pkg/roachpb/internal.pb.go ./pkg/roachpb/internal_raft.pb.go ./pkg/roachpb/io-formats.pb.go ./pkg/roachpb/metadata.pb.go ./pkg/rpc/heartbeat.pb.go ./pkg/server/diagnosticspb/diagnostics.pb.go ./pkg/server/serverpb/admin.pb.go ./pkg/server/serverpb/authentication.pb.go ./pkg/server/serverpb/init.pb.go ./pkg/server/serverpb/status.pb.go ./pkg/server/status/statuspb/status.pb.go ./pkg/settings/cluster/cluster_version.pb.go ./pkg/sql/distsqlrun/api.pb.go ./pkg/sql/distsqlrun/data.pb.go ./pkg/sql/distsqlrun/processors.pb.go ./pkg/sql/distsqlrun/stats.pb.go ./pkg/sql/pgwire/pgerror/errors.pb.go ./pkg/sql/sqlbase/encoded_datum.pb.go ./pkg/sql/sqlbase/join_type.pb.go ./pkg/sql/sqlbase/privilege.pb.go ./pkg/sql/sqlbase/structured.pb.go ./pkg/sql/stats/histogram.pb.go ./pkg/storage/api.pb.go ./pkg/storage/closedts/ctpb/entry.pb.go ./pkg/storage/engine/enginepb/file_registry.pb.go ./pkg/storage/engine/enginepb/mvcc.pb.go ./pkg/storage/engine/enginepb/mvcc3.pb.go ./pkg/storage/engine/enginepb/rocksdb.pb.go ./pkg/storage/raft.pb.go ./pkg/storage/storagepb/lease_status.pb.go ./pkg/storage/storagepb/liveness.pb.go ./pkg/storage/storagepb/log.pb.go ./pkg/storage/storagepb/proposer_kv.pb.go ./pkg/storage/storagepb/state.pb.go ./pkg/ts/tspb/timeseries.pb.go ./pkg/util/hlc/legacy_timestamp.pb.go ./pkg/util/hlc/timestamp.pb.go ./pkg/util/log/log.pb.go ./pkg/util/metric/metric.pb.go ./pkg/util/protoutil/clone.pb.go 
./pkg/util/tracing/recorded_span.pb.go ./pkg/util/unresolved_addr.pb.go
[06:48:46][Step 2/2] touch bin/.go_protobuf_sources
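The block above is the build's post-processing pass over generated `.pb.go` files: each `sed -i -E` rewrites import paths that the protobuf generator emitted under the `github.com/cockroachdb/cockroach/pkg/` prefix (or as legacy paths like `golang.org/x/net/context`), and `gofmt -s -w` then reformats the touched files before `touch bin/.go_protobuf_sources` marks the step done. A minimal, self-contained sketch of the pattern, using the `golang.org/x/net/context` rewrite exactly as it appears in the log (the temp file and its contents are illustrative, not from the build):

```shell
# Create a stand-in generated file with the legacy context import.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
package demo

import context "golang.org/x/net/context"

var _ = context.Background
EOF

# Same in-place rewrite as in the log: legacy x/net context -> stdlib context.
sed -i -E 's!golang.org/x/net/context!context!g' "$tmp"

# The import now points at the standard library.
grep 'import context "context"' "$tmp"
rm -f "$tmp"
```

The real build additionally runs `gofmt -s -w` over the rewritten files, which is omitted here to keep the sketch dependency-free.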
[06:48:46][Step 2/2] go test -race -exec 'stress -maxruns 100 -maxfails 1 -stderr -p 4' -tags ' make x86_64_pc_linux_gnu' -ldflags '-X github.com/cockroachdb/cockroach/pkg/build.typ=development -extldflags "" -X "github.com/cockroachdb/cockroach/pkg/build.tag=v2.2.0-alpha.00000000-1771-g310a049" -X "github.com/cockroachdb/cockroach/pkg/build.rev=310a04983cda8ab8d67cd401814341b9b7f8ce79" -X "github.com/cockroachdb/cockroach/pkg/build.cgoTargetTriple=x86_64-pc-linux-gnu" ' -run "." -timeout 0 github.com/cockroachdb/cockroach/pkg/storage -v -args -test.timeout 40m
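The `go test -race -exec 'stress …'` line works because `-exec` tells the go tool to compile the package's test binary as usual but launch it through the named program, as `program <binary> <args…>`; here that program is the repo's `stress` runner, which re-executes the binary until `-maxruns` or `-maxfails` is hit, across `-p 4` parallel workers. A toy stand-in (both the `runner` function and `/tmp/fake_test` are hypothetical, standing in for `stress` and the compiled `storage` test binary) showing the wrapping shape:

```shell
# A fake "runner" in the role of stress: it receives the test binary path
# plus the test's own flags, then executes it.
runner() { echo "runner got: $*"; "$@"; }

# A fake test binary in the role of the compiled storage.test.
cat > /tmp/fake_test <<'EOF'
#!/bin/sh
echo "PASS"
EOF
chmod +x /tmp/fake_test

# This is the call shape `go test -exec` produces.
runner /tmp/fake_test -test.timeout 40m
```

Note the doubled timeouts in the real invocation: `-timeout 0` disables go test's own watchdog so stress controls scheduling, while `-args -test.timeout 40m` (the `TESTTIMEOUT=40m` from the make line) still bounds each individual run.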
[06:49:51][Step 2/2] 0 runs so far, 0 failures, over 5s
[06:49:56][Step 2/2] 0 runs so far, 0 failures, over 10s
[06:50:01][Step 2/2] 0 runs so far, 0 failures, over 15s
[06:50:06][Step 2/2] 0 runs so far, 0 failures, over 20s
[06:50:11][Step 2/2] 0 runs so far, 0 failures, over 25s
[06:50:16][Step 2/2] 0 runs so far, 0 failures, over 30s
[06:50:21][Step 2/2] 0 runs so far, 0 failures, over 35s
[06:50:26][Step 2/2] 0 runs so far, 0 failures, over 40s
[06:50:31][Step 2/2] 0 runs so far, 0 failures, over 45s
[06:50:36][Step 2/2] 0 runs so far, 0 failures, over 50s
[06:50:41][Step 2/2] 0 runs so far, 0 failures, over 55s
[06:50:46][Step 2/2] 0 runs so far, 0 failures, over 1m0s
[06:50:51][Step 2/2] 0 runs so far, 0 failures, over 1m5s
[06:50:56][Step 2/2] 0 runs so far, 0 failures, over 1m10s
[06:51:01][Step 2/2] 0 runs so far, 0 failures, over 1m15s
[06:51:06][Step 2/2] 0 runs so far, 0 failures, over 1m20s
[06:51:11][Step 2/2] 0 runs so far, 0 failures, over 1m25s
[06:51:16][Step 2/2] 0 runs so far, 0 failures, over 1m30s
[06:51:21][Step 2/2] 0 runs so far, 0 failures, over 1m35s
[06:51:26][Step 2/2] 0 runs so far, 0 failures, over 1m40s
[06:51:31][Step 2/2] 0 runs so far, 0 failures, over 1m45s
[06:51:36][Step 2/2] 0 runs so far, 0 failures, over 1m50s
[06:51:41][Step 2/2] 0 runs so far, 0 failures, over 1m55s
[06:51:46][Step 2/2] 0 runs so far, 0 failures, over 2m0s
[06:51:51][Step 2/2] 0 runs so far, 0 failures, over 2m5s
[06:51:56][Step 2/2] 0 runs so far, 0 failures, over 2m10s
[06:52:01][Step 2/2] 0 runs so far, 0 failures, over 2m15s
[06:52:06][Step 2/2] 0 runs so far, 0 failures, over 2m20s
[06:52:11][Step 2/2] 0 runs so far, 0 failures, over 2m25s
[06:52:16][Step 2/2] 0 runs so far, 0 failures, over 2m30s
[06:52:21][Step 2/2] 0 runs so far, 0 failures, over 2m35s
[06:52:26][Step 2/2] 0 runs so far, 0 failures, over 2m40s
[06:52:31][Step 2/2] 0 runs so far, 0 failures, over 2m45s
[06:52:36][Step 2/2] 0 runs so far, 0 failures, over 2m50s
[06:52:41][Step 2/2] 0 runs so far, 0 failures, over 2m55s
[06:52:46][Step 2/2] 0 runs so far, 0 failures, over 3m0s
[06:52:51][Step 2/2] 0 runs so far, 0 failures, over 3m5s
[06:52:56][Step 2/2] 0 runs so far, 0 failures, over 3m10s
[06:53:01][Step 2/2] 0 runs so far, 0 failures, over 3m15s
[06:53:06][Step 2/2] 0 runs so far, 0 failures, over 3m20s
[06:53:11][Step 2/2] 0 runs so far, 0 failures, over 3m25s
[06:53:16][Step 2/2] 0 runs so far, 0 failures, over 3m30s
[06:53:21][Step 2/2] 0 runs so far, 0 failures, over 3m35s
[06:53:26][Step 2/2] 0 runs so far, 0 failures, over 3m40s
[06:53:31][Step 2/2] 0 runs so far, 0 failures, over 3m45s
[06:53:36][Step 2/2] 0 runs so far, 0 failures, over 3m50s
[06:53:41][Step 2/2] 0 runs so far, 0 failures, over 3m55s
[06:53:46][Step 2/2] 0 runs so far, 0 failures, over 4m0s
[06:53:51][Step 2/2] 0 runs so far, 0 failures, over 4m5s
[06:53:56][Step 2/2] 0 runs so far, 0 failures, over 4m10s
[06:54:01][Step 2/2] 0 runs so far, 0 failures, over 4m15s
[06:54:06][Step 2/2] 0 runs so far, 0 failures, over 4m20s
[06:54:11][Step 2/2] 0 runs so far, 0 failures, over 4m25s
[06:54:16][Step 2/2] 0 runs so far, 0 failures, over 4m30s
[06:54:21][Step 2/2] 0 runs so far, 0 failures, over 4m35s
[06:54:26][Step 2/2] 0 runs so far, 0 failures, over 4m40s
[06:54:31][Step 2/2] 0 runs so far, 0 failures, over 4m45s
[06:54:36][Step 2/2] 0 runs so far, 0 failures, over 4m50s
[06:54:41][Step 2/2] 0 runs so far, 0 failures, over 4m55s
[06:54:46][Step 2/2] 0 runs so far, 0 failures, over 5m0s
[06:54:51][Step 2/2] 0 runs so far, 0 failures, over 5m5s
[06:54:56][Step 2/2] 0 runs so far, 0 failures, over 5m10s
[06:55:01][Step 2/2] 0 runs so far, 0 failures, over 5m15s
[06:55:06][Step 2/2] 0 runs so far, 0 failures, over 5m20s
[06:55:11][Step 2/2] 0 runs so far, 0 failures, over 5m25s
[06:55:16][Step 2/2] 0 runs so far, 0 failures, over 5m30s
[06:55:21][Step 2/2] 0 runs so far, 0 failures, over 5m35s
[06:55:26][Step 2/2] 0 runs so far, 0 failures, over 5m40s
[06:55:31][Step 2/2] 0 runs so far, 0 failures, over 5m45s
[06:55:36][Step 2/2] 0 runs so far, 0 failures, over 5m50s
[06:55:41][Step 2/2] 0 runs so far, 0 failures, over 5m55s
[06:55:46][Step 2/2] 0 runs so far, 0 failures, over 6m0s
[06:55:51][Step 2/2] 0 runs so far, 0 failures, over 6m5s
[06:55:56][Step 2/2] 0 runs so far, 0 failures, over 6m10s
[06:56:01][Step 2/2] 1 runs so far, 0 failures, over 6m15s
[06:56:06][Step 2/2] 1 runs so far, 0 failures, over 6m20s
[06:56:11][Step 2/2] 4 runs so far, 0 failures, over 6m25s
[06:56:16][Step 2/2] 4 runs so far, 0 failures, over 6m30s
[06:56:21][Step 2/2] 4 runs so far, 0 failures, over 6m35s
[06:56:26][Step 2/2] 4 runs so far, 0 failures, over 6m40s
[06:56:31][Step 2/2] 4 runs so far, 0 failures, over 6m45s
[06:56:36][Step 2/2] 4 runs so far, 0 failures, over 6m50s
[06:56:41][Step 2/2] 4 runs so far, 0 failures, over 6m55s
[06:56:46][Step 2/2] 4 runs so far, 0 failures, over 7m0s
[06:56:51][Step 2/2] 4 runs so far, 0 failures, over 7m5s
[06:56:56][Step 2/2] 4 runs so far, 0 failures, over 7m10s
[06:57:01][Step 2/2] 4 runs so far, 0 failures, over 7m15s
[06:57:06][Step 2/2] 4 runs so far, 0 failures, over 7m20s
[06:57:11][Step 2/2] 4 runs so far, 0 failures, over 7m25s
[06:57:16][Step 2/2] 4 runs so far, 0 failures, over 7m30s
[06:57:21][Step 2/2] 4 runs so far, 0 failures, over 7m35s
[06:57:26][Step 2/2] 4 runs so far, 0 failures, over 7m40s
[06:57:31][Step 2/2] 4 runs so far, 0 failures, over 7m45s
[06:57:36][Step 2/2] 4 runs so far, 0 failures, over 7m50s
[06:57:41][Step 2/2] 4 runs so far, 0 failures, over 7m55s
[06:57:46][Step 2/2] 4 runs so far, 0 failures, over 8m0s
[06:57:51][Step 2/2] 4 runs so far, 0 failures, over 8m5s
[06:57:56][Step 2/2] 4 runs so far, 0 failures, over 8m10s
[06:58:01][Step 2/2] 4 runs so far, 0 failures, over 8m15s
[06:58:06][Step 2/2] 4 runs so far, 0 failures, over 8m20s
[06:58:11][Step 2/2] 4 runs so far, 0 failures, over 8m25s
[06:58:16][Step 2/2] 4 runs so far, 0 failures, over 8m30s
[06:58:21][Step 2/2] 4 runs so far, 0 failures, over 8m35s
[06:58:26][Step 2/2] 4 runs so far, 0 failures, over 8m40s
[06:58:31][Step 2/2] 4 runs so far, 0 failures, over 8m45s
[06:58:36][Step 2/2] 4 runs so far, 0 failures, over 8m50s
[06:58:41][Step 2/2] 4 runs so far, 0 failures, over 8m55s
[06:58:46][Step 2/2] 4 runs so far, 0 failures, over 9m0s
[06:58:51][Step 2/2] 4 runs so far, 0 failures, over 9m5s
[06:58:56][Step 2/2] 4 runs so far, 0 failures, over 9m10s
[06:59:01][Step 2/2] 4 runs so far, 0 failures, over 9m15s
[06:59:06][Step 2/2] 4 runs so far, 0 failures, over 9m20s
[06:59:11][Step 2/2] 4 runs so far, 0 failures, over 9m25s
[06:59:16][Step 2/2] 4 runs so far, 0 failures, over 9m30s
[06:59:21][Step 2/2] 4 runs so far, 0 failures, over 9m35s
[06:59:26][Step 2/2] 4 runs so far, 0 failures, over 9m40s
[06:59:31][Step 2/2] 4 runs so far, 0 failures, over 9m45s
[06:59:36][Step 2/2] 4 runs so far, 0 failures, over 9m50s
[06:59:41][Step 2/2] 4 runs so far, 0 failures, over 9m55s
[06:59:46][Step 2/2] 4 runs so far, 0 failures, over 10m0s
[06:59:51][Step 2/2] 4 runs so far, 0 failures, over 10m5s
[06:59:56][Step 2/2] 4 runs so far, 0 failures, over 10m10s
[07:00:01][Step 2/2] 4 runs so far, 0 failures, over 10m15s
[07:00:06][Step 2/2] 4 runs so far, 0 failures, over 10m20s
[07:00:11][Step 2/2] 4 runs so far, 0 failures, over 10m25s
[07:00:16][Step 2/2] 4 runs so far, 0 failures, over 10m30s
[07:00:21][Step 2/2] 4 runs so far, 0 failures, over 10m35s
[07:00:26][Step 2/2] 4 runs so far, 0 failures, over 10m40s
[07:00:31][Step 2/2] 4 runs so far, 0 failures, over 10m45s
[07:00:36][Step 2/2] 4 runs so far, 0 failures, over 10m50s
[07:00:41][Step 2/2] 4 runs so far, 0 failures, over 10m55s
[07:00:46][Step 2/2] 4 runs so far, 0 failures, over 11m0s
[07:00:51][Step 2/2] 4 runs so far, 0 failures, over 11m5s
[07:00:56][Step 2/2] 4 runs so far, 0 failures, over 11m10s
[07:01:01][Step 2/2] 4 runs so far, 0 failures, over 11m15s
[07:01:06][Step 2/2] 4 runs so far, 0 failures, over 11m20s
[07:01:11][Step 2/2] 4 runs so far, 0 failures, over 11m25s
[07:01:16][Step 2/2] 4 runs so far, 0 failures, over 11m30s
[07:01:21][Step 2/2] 4 runs so far, 0 failures, over 11m35s
[07:01:26][Step 2/2] 4 runs so far, 0 failures, over 11m40s
[07:01:31][Step 2/2] 4 runs so far, 0 failures, over 11m45s
[07:01:36][Step 2/2] 4 runs so far, 0 failures, over 11m50s
[07:01:41][Step 2/2] 4 runs so far, 0 failures, over 11m55s
[07:01:46][Step 2/2] 4 runs so far, 0 failures, over 12m0s
[07:01:51][Step 2/2] 4 runs so far, 0 failures, over 12m5s
[07:01:56][Step 2/2] 4 runs so far, 0 failures, over 12m10s
[07:02:01][Step 2/2] 4 runs so far, 0 failures, over 12m15s
[07:02:06][Step 2/2] 4 runs so far, 0 failures, over 12m20s
[07:02:11][Step 2/2] 5 runs so far, 0 failures, over 12m25s
[07:02:16][Step 2/2] 5 runs so far, 0 failures, over 12m30s
[07:02:21][Step 2/2] 5 runs so far, 0 failures, over 12m35s
[07:02:26][Step 2/2] 5 runs so far, 0 failures, over 12m40s
[07:02:31][Step 2/2] 8 runs so far, 0 failures, over 12m45s
[07:02:36][Step 2/2] 8 runs so far, 0 failures, over 12m50s
[07:02:41][Step 2/2] 8 runs so far, 0 failures, over 12m55s
[07:02:46][Step 2/2] 8 runs so far, 0 failures, over 13m0s
[07:02:51][Step 2/2] 8 runs so far, 0 failures, over 13m5s
[07:02:56][Step 2/2] 8 runs so far, 0 failures, over 13m10s
[07:03:01][Step 2/2] 8 runs so far, 0 failures, over 13m15s
[07:03:06][Step 2/2] 8 runs so far, 0 failures, over 13m20s
[07:03:11][Step 2/2] 8 runs so far, 0 failures, over 13m25s
[07:03:16][Step 2/2] 8 runs so far, 0 failures, over 13m30s
[07:03:21][Step 2/2] 8 runs so far, 0 failures, over 13m35s
[07:03:26][Step 2/2] 8 runs so far, 0 failures, over 13m40s
[07:03:31][Step 2/2] 8 runs so far, 0 failures, over 13m45s
[07:03:36][Step 2/2] 8 runs so far, 0 failures, over 13m50s
[07:03:41][Step 2/2] 8 runs so far, 0 failures, over 13m55s
[07:03:46][Step 2/2] 8 runs so far, 0 failures, over 14m0s
[07:03:51][Step 2/2] 8 runs so far, 0 failures, over 14m5s
[07:03:56][Step 2/2] 8 runs so far, 0 failures, over 14m10s
[07:04:01][Step 2/2] 8 runs so far, 0 failures, over 14m15s
[07:04:06][Step 2/2] 8 runs so far, 0 failures, over 14m20s
[07:04:11][Step 2/2] 8 runs so far, 0 failures, over 14m25s
[07:04:16][Step 2/2] 8 runs so far, 0 failures, over 14m30s
[07:04:21][Step 2/2] 8 runs so far, 0 failures, over 14m35s
[07:04:26][Step 2/2] 8 runs so far, 0 failures, over 14m40s
[07:04:31][Step 2/2] 8 runs so far, 0 failures, over 14m45s
[07:04:36][Step 2/2] 8 runs so far, 0 failures, over 14m50s
[07:04:41][Step 2/2] 8 runs so far, 0 failures, over 14m55s
[07:04:46][Step 2/2] 8 runs so far, 0 failures, over 15m0s
[07:04:51][Step 2/2] 8 runs so far, 0 failures, over 15m5s
[07:04:56][Step 2/2] 8 runs so far, 0 failures, over 15m10s
[07:05:01][Step 2/2] 8 runs so far, 0 failures, over 15m15s
[07:05:06][Step 2/2] 8 runs so far, 0 failures, over 15m20s
[07:05:11][Step 2/2] 8 runs so far, 0 failures, over 15m25s
[07:05:16][Step 2/2] 8 runs so far, 0 failures, over 15m30s
[07:05:21][Step 2/2] 8 runs so far, 0 failures, over 15m35s
[07:05:26][Step 2/2] 8 runs so far, 0 failures, over 15m40s
[07:05:31][Step 2/2] 8 runs so far, 0 failures, over 15m45s
[07:05:36][Step 2/2] 8 runs so far, 0 failures, over 15m50s
[07:05:41][Step 2/2] 8 runs so far, 0 failures, over 15m55s
[07:05:46][Step 2/2] 8 runs so far, 0 failures, over 16m0s
[07:05:51][Step 2/2] 8 runs so far, 0 failures, over 16m5s
[07:05:56][Step 2/2] 8 runs so far, 0 failures, over 16m10s
[07:06:01][Step 2/2] 8 runs so far, 0 failures, over 16m15s
[07:06:06][Step 2/2] 8 runs so far, 0 failures, over 16m20s
[07:06:11][Step 2/2] 8 runs so far, 0 failures, over 16m25s
[07:06:16][Step 2/2] 8 runs so far, 0 failures, over 16m30s
[07:06:21][Step 2/2] 8 runs so far, 0 failures, over 16m35s
[07:06:26][Step 2/2] 8 runs so far, 0 failures, over 16m40s
[07:06:31][Step 2/2] 8 runs so far, 0 failures, over 16m45s
[07:06:36][Step 2/2] 8 runs so far, 0 failures, over 16m50s
[07:06:41][Step 2/2] 8 runs so far, 0 failures, over 16m55s
[07:06:46][Step 2/2] 8 runs so far, 0 failures, over 17m0s
[07:06:51][Step 2/2] 8 runs so far, 0 failures, over 17m5s
[07:06:56][Step 2/2] 8 runs so far, 0 failures, over 17m10s
[07:07:01][Step 2/2] 8 runs so far, 0 failures, over 17m15s
[07:07:06][Step 2/2] 8 runs so far, 0 failures, over 17m20s
[07:07:11][Step 2/2] 8 runs so far, 0 failures, over 17m25s
[07:07:16][Step 2/2] 8 runs so far, 0 failures, over 17m30s
[07:07:21][Step 2/2] 8 runs so far, 0 failures, over 17m35s
[07:07:26][Step 2/2] 8 runs so far, 0 failures, over 17m40s
[07:07:31][Step 2/2] 8 runs so far, 0 failures, over 17m45s
[07:07:36][Step 2/2] 8 runs so far, 0 failures, over 17m50s
[07:07:41][Step 2/2] 8 runs so far, 0 failures, over 17m55s
[07:07:46][Step 2/2] 8 runs so far, 0 failures, over 18m0s
[07:07:51][Step 2/2] 8 runs so far, 0 failures, over 18m5s
[07:07:56][Step 2/2] 8 runs so far, 0 failures, over 18m10s
[07:08:01][Step 2/2] 8 runs so far, 0 failures, over 18m15s
[07:08:06][Step 2/2] 9 runs so far, 0 failures, over 18m20s
[07:08:11][Step 2/2] 9 runs so far, 0 failures, over 18m25s
[07:08:16][Step 2/2] 9 runs so far, 0 failures, over 18m30s
[07:08:21][Step 2/2] 9 runs so far, 0 failures, over 18m35s
[07:08:26][Step 2/2] 9 runs so far, 0 failures, over 18m40s
[07:08:31][Step 2/2] 9 runs so far, 0 failures, over 18m45s
[07:08:36][Step 2/2] 9 runs so far, 0 failures, over 18m50s
[07:08:41][Step 2/2] 10 runs so far, 0 failures, over 18m55s
[07:08:46][Step 2/2] 11 runs so far, 0 failures, over 19m0s
[07:08:51][Step 2/2] 12 runs so far, 0 failures, over 19m5s
[07:08:56][Step 2/2] 12 runs so far, 0 failures, over 19m10s
[07:09:01][Step 2/2] 12 runs so far, 0 failures, over 19m15s
[07:09:06][Step 2/2] 12 runs so far, 0 failures, over 19m20s
[07:09:11][Step 2/2] 12 runs so far, 0 failures, over 19m25s
[07:09:16][Step 2/2] 12 runs so far, 0 failures, over 19m30s
[07:09:21][Step 2/2] 12 runs so far, 0 failures, over 19m35s
[07:09:26][Step 2/2] 12 runs so far, 0 failures, over 19m40s
[07:09:31][Step 2/2] 12 runs so far, 0 failures, over 19m45s
[07:09:36][Step 2/2] 12 runs so far, 0 failures, over 19m50s
[07:09:41][Step 2/2] 12 runs so far, 0 failures, over 19m55s
[07:09:46][Step 2/2] 12 runs so far, 0 failures, over 20m0s
[07:09:51][Step 2/2] 12 runs so far, 0 failures, over 20m5s
[07:09:56][Step 2/2] 12 runs so far, 0 failures, over 20m10s
[07:10:01][Step 2/2] 12 runs so far, 0 failures, over 20m15s
[07:10:06][Step 2/2] 12 runs so far, 0 failures, over 20m20s
[07:10:11][Step 2/2] 12 runs so far, 0 failures, over 20m25s
[07:10:16][Step 2/2] 12 runs so far, 0 failures, over 20m30s
[07:10:21][Step 2/2] 12 runs so far, 0 failures, over 20m35s
[07:10:26][Step 2/2] 12 runs so far, 0 failures, over 20m40s
[07:10:31][Step 2/2] 12 runs so far, 0 failures, over 20m45s
[07:10:36][Step 2/2] 12 runs so far, 0 failures, over 20m50s
[07:10:41][Step 2/2] 12 runs so far, 0 failures, over 20m55s
[07:10:46][Step 2/2] 12 runs so far, 0 failures, over 21m0s
[07:10:51][Step 2/2] 12 runs so far, 0 failures, over 21m5s
[07:10:56][Step 2/2] 12 runs so far, 0 failures, over 21m10s
[07:11:01][Step 2/2] 12 runs so far, 0 failures, over 21m15s
[07:11:06][Step 2/2] 12 runs so far, 0 failures, over 21m20s
[07:11:11][Step 2/2] 12 runs so far, 0 failures, over 21m25s
[07:11:16][Step 2/2] 12 runs so far, 0 failures, over 21m30s
[07:11:21][Step 2/2] 12 runs so far, 0 failures, over 21m35s
[07:11:26][Step 2/2] 12 runs so far, 0 failures, over 21m40s
[07:11:31][Step 2/2] 12 runs so far, 0 failures, over 21m45s
[07:11:36][Step 2/2] 12 runs so far, 0 failures, over 21m50s
[07:11:41][Step 2/2] 12 runs so far, 0 failures, over 21m55s
[07:11:46][Step 2/2] 12 runs so far, 0 failures, over 22m0s
[07:11:51][Step 2/2] 12 runs so far, 0 failures, over 22m5s
[07:11:56][Step 2/2] 12 runs so far, 0 failures, over 22m10s
[07:12:01][Step 2/2] 12 runs so far, 0 failures, over 22m15s
[07:12:06][Step 2/2] 12 runs so far, 0 failures, over 22m20s
[07:12:11][Step 2/2] 12 runs so far, 0 failures, over 22m25s
[07:12:16][Step 2/2] 12 runs so far, 0 failures, over 22m30s
[07:12:21][Step 2/2] 12 runs so far, 0 failures, over 22m35s
[07:12:26][Step 2/2] 12 runs so far, 0 failures, over 22m40s
[07:12:31][Step 2/2] 12 runs so far, 0 failures, over 22m45s
[07:12:36][Step 2/2] 12 runs so far, 0 failures, over 22m50s
[07:12:41][Step 2/2] 12 runs so far, 0 failures, over 22m55s
[07:12:46][Step 2/2] 12 runs so far, 0 failures, over 23m0s
[07:12:51][Step 2/2] 12 runs so far, 0 failures, over 23m5s
[07:12:56][Step 2/2] 12 runs so far, 0 failures, over 23m10s
[07:13:01][Step 2/2] 12 runs so far, 0 failures, over 23m15s
[07:13:06][Step 2/2] 12 runs so far, 0 failures, over 23m20s
[07:13:11][Step 2/2] 12 runs so far, 0 failures, over 23m25s
[07:13:16][Step 2/2] 12 runs so far, 0 failures, over 23m30s
[07:13:21][Step 2/2] 12 runs so far, 0 failures, over 23m35s
[07:13:26][Step 2/2] 12 runs so far, 0 failures, over 23m40s
[07:13:31][Step 2/2] 12 runs so far, 0 failures, over 23m45s
[07:13:36][Step 2/2] 12 runs so far, 0 failures, over 23m50s
[07:13:41][Step 2/2] 12 runs so far, 0 failures, over 23m55s
[07:13:46][Step 2/2] 12 runs so far, 0 failures, over 24m0s
[07:13:51][Step 2/2] 12 runs so far, 0 failures, over 24m5s
[07:13:56][Step 2/2] 12 runs so far, 0 failures, over 24m10s
[07:14:01][Step 2/2] 12 runs so far, 0 failures, over 24m15s
[07:14:06][Step 2/2] 12 runs so far, 0 failures, over 24m20s
[07:14:11][Step 2/2] 12 runs so far, 0 failures, over 24m25s
[07:14:16][Step 2/2] 12 runs so far, 0 failures, over 24m30s
[07:14:21][Step 2/2] 13 runs so far, 0 failures, over 24m35s
[07:14:26][Step 2/2] 13 runs so far, 0 failures, over 24m40s
[07:14:31][Step 2/2] 13 runs so far, 0 failures, over 24m45s
[07:14:36][Step 2/2] 13 runs so far, 0 failures, over 24m50s
[07:14:41][Step 2/2] 13 runs so far, 0 failures, over 24m55s
[07:14:46][Step 2/2] 13 runs so far, 0 failures, over 25m0s
[07:14:51][Step 2/2] 14 runs so far, 0 failures, over 25m5s
[07:14:56][Step 2/2] 14 runs so far, 0 failures, over 25m10s
[07:15:01][Step 2/2] 15 runs so far, 0 failures, over 25m15s
[07:15:06][Step 2/2] 16 runs so far, 0 failures, over 25m20s
[07:15:11][Step 2/2] 16 runs so far, 0 failures, over 25m25s
[07:15:16][Step 2/2] 16 runs so far, 0 failures, over 25m30s
[07:15:21][Step 2/2] 16 runs so far, 0 failures, over 25m35s
[07:15:26][Step 2/2] 16 runs so far, 0 failures, over 25m40s
[07:15:31][Step 2/2] 16 runs so far, 0 failures, over 25m45s
[07:15:36][Step 2/2] 16 runs so far, 0 failures, over 25m50s
[07:15:41][Step 2/2] 16 runs so far, 0 failures, over 25m55s
[07:15:46][Step 2/2] 16 runs so far, 0 failures, over 26m0s
[07:15:51][Step 2/2] 16 runs so far, 0 failures, over 26m5s
[07:15:56][Step 2/2] 16 runs so far, 0 failures, over 26m10s
[07:16:01][Step 2/2] 16 runs so far, 0 failures, over 26m15s
[07:16:06][Step 2/2] 16 runs so far, 0 failures, over 26m20s
[07:16:11][Step 2/2] 16 runs so far, 0 failures, over 26m25s
[07:16:16][Step 2/2] 16 runs so far, 0 failures, over 26m30s
[07:16:21][Step 2/2] 16 runs so far, 0 failures, over 26m35s
[07:16:26][Step 2/2] 16 runs so far, 0 failures, over 26m40s
[07:16:31][Step 2/2] 16 runs so far, 0 failures, over 26m45s
[07:16:36][Step 2/2] 16 runs so far, 0 failures, over 26m50s
[07:16:41][Step 2/2] 16 runs so far, 0 failures, over 26m55s
[07:16:46][Step 2/2] 16 runs so far, 0 failures, over 27m0s
[07:16:51][Step 2/2] 16 runs so far, 0 failures, over 27m5s
[07:16:56][Step 2/2] 16 runs so far, 0 failures, over 27m10s
[07:17:01][Step 2/2] 16 runs so far, 0 failures, over 27m15s
[07:17:06][Step 2/2] 16 runs so far, 0 failures, over 27m20s
[07:17:11][Step 2/2] 16 runs so far, 0 failures, over 27m25s
[07:17:16][Step 2/2] 16 runs so far, 0 failures, over 27m30s
[07:17:21][Step 2/2] 16 runs so far, 0 failures, over 27m35s
[07:17:26][Step 2/2] 16 runs so far, 0 failures, over 27m40s
[07:17:31][Step 2/2] 16 runs so far, 0 failures, over 27m45s
[07:17:36][Step 2/2] 16 runs so far, 0 failures, over 27m50s
[07:17:41][Step 2/2] 16 runs so far, 0 failures, over 27m55s
[07:17:46][Step 2/2] 16 runs so far, 0 failures, over 28m0s
[07:17:51][Step 2/2] 16 runs so far, 0 failures, over 28m5s
[07:17:56][Step 2/2] 16 runs so far, 0 failures, over 28m10s
[07:18:01][Step 2/2] 16 runs so far, 0 failures, over 28m15s
[07:18:06][Step 2/2] 16 runs so far, 0 failures, over 28m20s
[07:18:11][Step 2/2] 16 runs so far, 0 failures, over 28m25s
[07:18:16][Step 2/2] 16 runs so far, 0 failures, over 28m30s
[07:18:21][Step 2/2] 16 runs so far, 0 failures, over 28m35s
[07:18:26][Step 2/2] 16 runs so far, 0 failures, over 28m40s
[07:18:31][Step 2/2] 16 runs so far, 0 failures, over 28m45s
[07:18:36][Step 2/2] 16 runs so far, 0 failures, over 28m50s
[07:18:41][Step 2/2] 16 runs so far, 0 failures, over 28m55s
[07:18:46][Step 2/2] 16 runs so far, 0 failures, over 29m0s
[07:18:51][Step 2/2] 16 runs so far, 0 failures, over 29m5s
[07:18:56][Step 2/2] 16 runs so far, 0 failures, over 29m10s
[07:19:01][Step 2/2] 16 runs so far, 0 failures, over 29m15s
[07:19:06][Step 2/2] 16 runs so far, 0 failures, over 29m20s
[07:19:11][Step 2/2] 16 runs so far, 0 failures, over 29m25s
[07:19:16][Step 2/2] 16 runs so far, 0 failures, over 29m30s
[07:19:21][Step 2/2] 16 runs so far, 0 failures, over 29m35s
[07:19:26][Step 2/2] 16 runs so far, 0 failures, over 29m40s
[07:19:31][Step 2/2] 16 runs so far, 0 failures, over 29m45s
[07:19:36][Step 2/2] 16 runs so far, 0 failures, over 29m50s
[07:19:41][Step 2/2] 16 runs so far, 0 failures, over 29m55s
[07:19:46][Step 2/2] 16 runs so far, 0 failures, over 30m0s
[07:19:51][Step 2/2] 16 runs so far, 0 failures, over 30m5s
[07:19:56][Step 2/2] 16 runs so far, 0 failures, over 30m10s
[07:20:01][Step 2/2] 16 runs so far, 0 failures, over 30m15s
[07:20:06][Step 2/2] 16 runs so far, 0 failures, over 30m20s
[07:20:11][Step 2/2] 16 runs so far, 0 failures, over 30m25s
[07:20:16][Step 2/2] 16 runs so far, 0 failures, over 30m30s
[07:20:21][Step 2/2] 16 runs so far, 0 failures, over 30m35s
[07:20:26][Step 2/2] 16 runs so far, 0 failures, over 30m40s
[07:20:31][Step 2/2] 17 runs so far, 0 failures, over 30m45s
[07:20:36][Step 2/2] 17 runs so far, 0 failures, over 30m50s
[07:20:41][Step 2/2] 17 runs so far, 0 failures, over 30m55s
[07:20:46][Step 2/2] 17 runs so far, 0 failures, over 31m0s
[07:20:51][Step 2/2] 17 runs so far, 0 failures, over 31m5s
[07:20:56][Step 2/2] 17 runs so far, 0 failures, over 31m10s
[07:21:01][Step 2/2] 17 runs so far, 0 failures, over 31m15s
[07:21:06][Step 2/2] 18 runs so far, 0 failures, over 31m20s
[07:21:11][Step 2/2] 19 runs so far, 0 failures, over 31m25s
[07:21:16][Step 2/2] 20 runs so far, 0 failures, over 31m30s
[07:21:21][Step 2/2] 20 runs so far, 0 failures, over 31m35s
[07:21:26][Step 2/2] 20 runs so far, 0 failures, over 31m40s
[07:21:31][Step 2/2] 20 runs so far, 0 failures, over 31m45s
[07:21:36][Step 2/2] 20 runs so far, 0 failures, over 31m50s
[07:21:41][Step 2/2] 20 runs so far, 0 failures, over 31m55s
[07:21:46][Step 2/2] 20 runs so far, 0 failures, over 32m0s
[07:21:51][Step 2/2] 20 runs so far, 0 failures, over 32m5s
[07:21:56][Step 2/2] 20 runs so far, 0 failures, over 32m10s
[07:22:01][Step 2/2] 20 runs so far, 0 failures, over 32m15s
[07:22:06][Step 2/2] 20 runs so far, 0 failures, over 32m20s
[07:22:11][Step 2/2] 20 runs so far, 0 failures, over 32m25s
[07:22:16][Step 2/2] 20 runs so far, 0 failures, over 32m30s
[07:22:21][Step 2/2] 20 runs so far, 0 failures, over 32m35s
[07:22:26][Step 2/2] 20 runs so far, 0 failures, over 32m40s
[07:22:31][Step 2/2] 20 runs so far, 0 failures, over 32m45s
[07:22:36][Step 2/2] 20 runs so far, 0 failures, over 32m50s
[07:22:41][Step 2/2] 20 runs so far, 0 failures, over 32m55s
[07:22:46][Step 2/2] 20 runs so far, 0 failures, over 33m0s
[07:22:51][Step 2/2] 20 runs so far, 0 failures, over 33m5s
[07:22:56][Step 2/2] 20 runs so far, 0 failures, over 33m10s
[07:23:01][Step 2/2] 20 runs so far, 0 failures, over 33m15s
[07:23:06][Step 2/2] 20 runs so far, 0 failures, over 33m20s
[07:23:11][Step 2/2] 20 runs so far, 0 failures, over 33m25s
[07:23:16][Step 2/2] 20 runs so far, 0 failures, over 33m30s
[07:23:21][Step 2/2] 20 runs so far, 0 failures, over 33m35s
[07:23:26][Step 2/2] 20 runs so far, 0 failures, over 33m40s
[07:23:31][Step 2/2] 20 runs so far, 0 failures, over 33m45s
[07:23:36][Step 2/2] 20 runs so far, 0 failures, over 33m50s
[07:23:41][Step 2/2] 20 runs so far, 0 failures, over 33m55s
[07:23:46][Step 2/2] 20 runs so far, 0 failures, over 34m0s
[07:23:51][Step 2/2] 20 runs so far, 0 failures, over 34m5s
[07:23:56][Step 2/2] 20 runs so far, 0 failures, over 34m10s
[07:24:01][Step 2/2] 20 runs so far, 0 failures, over 34m15s
[07:24:06][Step 2/2] 20 runs so far, 0 failures, over 34m20s
[07:24:11][Step 2/2] 20 runs so far, 0 failures, over 34m25s
[07:24:16][Step 2/2] 20 runs so far, 0 failures, over 34m30s
[07:24:21][Step 2/2] 20 runs so far, 0 failures, over 34m35s
[07:24:26][Step 2/2] 20 runs so far, 0 failures, over 34m40s
[07:24:31][Step 2/2] 20 runs so far, 0 failures, over 34m45s
[07:24:36][Step 2/2] 20 runs so far, 0 failures, over 34m50s
[07:24:41][Step 2/2] 20 runs so far, 0 failures, over 34m55s
[07:24:46][Step 2/2] 20 runs so far, 0 failures, over 35m0s
[07:24:51][Step 2/2] 20 runs so far, 0 failures, over 35m5s
[07:24:56][Step 2/2] 20 runs so far, 0 failures, over 35m10s
[07:25:01][Step 2/2] 20 runs so far, 0 failures, over 35m15s
[07:25:06][Step 2/2] 20 runs so far, 0 failures, over 35m20s
[07:25:11][Step 2/2] 20 runs so far, 0 failures, over 35m25s
[07:25:16][Step 2/2] 20 runs so far, 0 failures, over 35m30s
[07:25:21][Step 2/2] 20 runs so far, 0 failures, over 35m35s
[07:25:26][Step 2/2] 20 runs so far, 0 failures, over 35m40s
[07:25:31][Step 2/2] 20 runs so far, 0 failures, over 35m45s
[07:25:36][Step 2/2] 20 runs so far, 0 failures, over 35m50s
[07:25:41][Step 2/2] 20 runs so far, 0 failures, over 35m55s
[07:25:46][Step 2/2] 20 runs so far, 0 failures, over 36m0s
[07:25:51][Step 2/2] 20 runs so far, 0 failures, over 36m5s
[07:25:56][Step 2/2] 20 runs so far, 0 failures, over 36m10s
[07:26:01][Step 2/2] 20 runs so far, 0 failures, over 36m15s
[07:26:06][Step 2/2] 21 runs so far, 0 failures, over 36m20s
[07:26:11][Step 2/2] 21 runs so far, 0 failures, over 36m25s
[07:26:16][Step 2/2] 21 runs so far, 0 failures, over 36m30s
[07:26:21][Step 2/2] 21 runs so far, 0 failures, over 36m35s
[07:26:26][Step 2/2] 21 runs so far, 0 failures, over 36m40s
[07:26:31][Step 2/2] 21 runs so far, 0 failures, over 36m45s
[07:26:36][Step 2/2] 21 runs so far, 0 failures, over 36m50s
[07:26:41][Step 2/2] 21 runs so far, 0 failures, over 36m55s
[07:26:46][Step 2/2] 21 runs so far, 0 failures, over 37m0s
[07:26:51][Step 2/2] 21 runs so far, 0 failures, over 37m5s
[07:26:56][Step 2/2] 21 runs so far, 0 failures, over 37m10s
[07:27:01][Step 2/2] 21 runs so far, 0 failures, over 37m15s
[07:27:06][Step 2/2] 21 runs so far, 0 failures, over 37m20s
[07:27:11][Step 2/2] 21 runs so far, 0 failures, over 37m25s
[07:27:16][Step 2/2] 22 runs so far, 0 failures, over 37m30s
[07:27:21][Step 2/2] 22 runs so far, 0 failures, over 37m35s
[07:27:26][Step 2/2] 23 runs so far, 0 failures, over 37m40s
[07:27:31][Step 2/2] 23 runs so far, 0 failures, over 37m45s
[07:27:36][Step 2/2] 23 runs so far, 0 failures, over 37m50s
[07:27:41][Step 2/2] 23 runs so far, 0 failures, over 37m55s
[07:27:46][Step 2/2] 24 runs so far, 0 failures, over 38m0s
[07:27:51][Step 2/2] 24 runs so far, 0 failures, over 38m5s
[07:27:56][Step 2/2] 24 runs so far, 0 failures, over 38m10s
[07:28:01][Step 2/2] 24 runs so far, 0 failures, over 38m15s
[07:28:06][Step 2/2] 24 runs so far, 0 failures, over 38m20s
[07:28:11][Step 2/2] 24 runs so far, 0 failures, over 38m25s
[07:28:16][Step 2/2] 24 runs so far, 0 failures, over 38m30s
[07:28:21][Step 2/2] 24 runs so far, 0 failures, over 38m35s
[07:28:26][Step 2/2] 24 runs so far, 0 failures, over 38m40s
[07:28:31][Step 2/2] 24 runs so far, 0 failures, over 38m45s
[07:28:36][Step 2/2] 24 runs so far, 0 failures, over 38m50s
[07:28:41][Step 2/2] 24 runs so far, 0 failures, over 38m55s
[07:28:46][Step 2/2] 24 runs so far, 0 failures, over 39m0s
[07:28:51][Step 2/2] 24 runs so far, 0 failures, over 39m5s
[07:28:56][Step 2/2] 24 runs so far, 0 failures, over 39m10s
[07:29:01][Step 2/2] 24 runs so far, 0 failures, over 39m15s
[07:29:06][Step 2/2] 24 runs so far, 0 failures, over 39m20s
[07:29:11][Step 2/2] 24 runs so far, 0 failures, over 39m25s
[07:29:16][Step 2/2] 24 runs so far, 0 failures, over 39m30s
[07:29:21][Step 2/2] 24 runs so far, 0 failures, over 39m35s
[07:29:26][Step 2/2] 24 runs so far, 0 failures, over 39m40s
[07:29:31][Step 2/2] 24 runs so far, 0 failures, over 39m45s
[07:29:36][Step 2/2] 24 runs so far, 0 failures, over 39m50s
[07:29:41][Step 2/2] 24 runs so far, 0 failures, over 39m55s
[07:29:46][Step 2/2] 24 runs so far, 0 failures, over 40m0s
[07:29:51][Step 2/2] 24 runs so far, 0 failures, over 40m5s
[07:29:56][Step 2/2] 24 runs so far, 0 failures, over 40m10s
[07:30:01][Step 2/2] 24 runs so far, 0 failures, over 40m15s
[07:30:06][Step 2/2] 24 runs so far, 0 failures, over 40m20s
[07:30:11][Step 2/2] 24 runs so far, 0 failures, over 40m25s
[07:30:16][Step 2/2] 24 runs so far, 0 failures, over 40m30s
[07:30:21][Step 2/2] 24 runs so far, 0 failures, over 40m35s
[07:30:26][Step 2/2] 24 runs so far, 0 failures, over 40m40s
[07:30:31][Step 2/2] 24 runs so far, 0 failures, over 40m45s
[07:30:36][Step 2/2] 24 runs so far, 0 failures, over 40m50s
[07:30:41][Step 2/2] 24 runs so far, 0 failures, over 40m55s
[07:30:46][Step 2/2] 24 runs so far, 0 failures, over 41m0s
[07:30:51][Step 2/2] 24 runs so far, 0 failures, over 41m5s
[07:30:56][Step 2/2] 24 runs so far, 0 failures, over 41m10s
[07:31:01][Step 2/2] 24 runs so far, 0 failures, over 41m15s
[07:31:06][Step 2/2] 24 runs so far, 0 failures, over 41m20s
[07:31:11][Step 2/2] 24 runs so far, 0 failures, over 41m25s
[07:31:16][Step 2/2] 24 runs so far, 0 failures, over 41m30s
[07:31:21][Step 2/2] 24 runs so far, 0 failures, over 41m35s
[07:31:26][Step 2/2] 24 runs so far, 0 failures, over 41m40s
[07:31:31][Step 2/2] 24 runs so far, 0 failures, over 41m45s
[07:31:36][Step 2/2] 24 runs so far, 0 failures, over 41m50s
[07:31:41][Step 2/2] 24 runs so far, 0 failures, over 41m55s
[07:31:46][Step 2/2] 24 runs so far, 0 failures, over 42m0s
[07:31:51][Step 2/2] 24 runs so far, 0 failures, over 42m5s
[07:31:56][Step 2/2] 24 runs so far, 0 failures, over 42m10s
[07:32:01][Step 2/2] 24 runs so far, 0 failures, over 42m15s
[07:32:06][Step 2/2] 24 runs so far, 0 failures, over 42m20s
[07:32:11][Step 2/2] 24 runs so far, 0 failures, over 42m25s
[07:32:16][Step 2/2] 24 runs so far, 0 failures, over 42m30s
[07:32:21][Step 2/2] 25 runs so far, 0 failures, over 42m35s
[07:32:26][Step 2/2] 25 runs so far, 0 failures, over 42m40s
[07:32:31][Step 2/2] 25 runs so far, 0 failures, over 42m45s
[07:32:36][Step 2/2] 25 runs so far, 0 failures, over 42m50s
[07:32:41][Step 2/2] 25 runs so far, 0 failures, over 42m55s
[07:32:46][Step 2/2] 25 runs so far, 0 failures, over 43m0s
[07:32:51][Step 2/2] 25 runs so far, 0 failures, over 43m5s
[07:32:56][Step 2/2] 25 runs so far, 0 failures, over 43m10s
[07:33:01][Step 2/2] 25 runs so far, 0 failures, over 43m15s
[07:33:06][Step 2/2] 25 runs so far, 0 failures, over 43m20s
[07:33:11][Step 2/2] 25 runs so far, 0 failures, over 43m25s
[07:33:16][Step 2/2] 25 runs so far, 0 failures, over 43m30s
[07:33:21][Step 2/2] 26 runs so far, 0 failures, over 43m35s
[07:33:26][Step 2/2] 27 runs so far, 0 failures, over 43m40s
[07:33:31][Step 2/2] 27 runs so far, 0 failures, over 43m45s
[07:33:36][Step 2/2] 27 runs so far, 0 failures, over 43m50s
[07:33:41][Step 2/2] 27 runs so far, 0 failures, over 43m55s
[07:33:46][Step 2/2] 27 runs so far, 0 failures, over 44m0s
[07:33:51][Step 2/2] 28 runs so far, 0 failures, over 44m5s
[07:33:56][Step 2/2] 28 runs so far, 0 failures, over 44m10s
[07:34:01][Step 2/2] 28 runs so far, 0 failures, over 44m15s
[07:34:06][Step 2/2] 28 runs so far, 0 failures, over 44m20s
[07:34:11][Step 2/2] 28 runs so far, 0 failures, over 44m25s
[07:34:16][Step 2/2] 28 runs so far, 0 failures, over 44m30s
[07:34:21][Step 2/2] 28 runs so far, 0 failures, over 44m35s
[07:34:26][Step 2/2] 28 runs so far, 0 failures, over 44m40s
[07:34:31][Step 2/2] 28 runs so far, 0 failures, over 44m45s
[07:34:36][Step 2/2] 28 runs so far, 0 failures, over 44m50s
[07:34:41][Step 2/2] 28 runs so far, 0 failures, over 44m55s
[07:34:46][Step 2/2] 28 runs so far, 0 failures, over 45m0s
[07:34:51][Step 2/2] 28 runs so far, 0 failures, over 45m5s
[07:34:56][Step 2/2] 28 runs so far, 0 failures, over 45m10s
[07:35:01][Step 2/2] 28 runs so far, 0 failures, over 45m15s
[07:35:06][Step 2/2] 28 runs so far, 0 failures, over 45m20s
[07:35:11][Step 2/2] 28 runs so far, 0 failures, over 45m25s
[07:35:16][Step 2/2] 28 runs so far, 0 failures, over 45m30s
[07:35:21][Step 2/2] 28 runs so far, 0 failures, over 45m35s
[07:35:26][Step 2/2] 28 runs so far, 0 failures, over 45m40s
[07:35:31][Step 2/2] 28 runs so far, 0 failures, over 45m45s
[07:35:36][Step 2/2] 28 runs so far, 0 failures, over 45m50s
[07:35:41][Step 2/2] 28 runs so far, 0 failures, over 45m55s
[07:35:46][Step 2/2] 28 runs so far, 0 failures, over 46m0s
[07:35:51][Step 2/2] 28 runs so far, 0 failures, over 46m5s
[07:35:56][Step 2/2] 28 runs so far, 0 failures, over 46m10s
[07:36:01][Step 2/2] 28 runs so far, 0 failures, over 46m15s
[07:36:06][Step 2/2] 28 runs so far, 0 failures, over 46m20s
[07:36:11][Step 2/2] 28 runs so far, 0 failures, over 46m25s
[07:36:16][Step 2/2] 28 runs so far, 0 failures, over 46m30s
[07:36:21][Step 2/2] 28 runs so far, 0 failures, over 46m35s
[07:36:26][Step 2/2] 28 runs so far, 0 failures, over 46m40s
[07:36:31][Step 2/2] 28 runs so far, 0 failures, over 46m45s
[07:36:36][Step 2/2] 28 runs so far, 0 failures, over 46m50s
[07:36:41][Step 2/2] 28 runs so far, 0 failures, over 46m55s
[07:36:46][Step 2/2] 28 runs so far, 0 failures, over 47m0s
[07:36:51][Step 2/2] 28 runs so far, 0 failures, over 47m5s
[07:36:56][Step 2/2] 28 runs so far, 0 failures, over 47m10s
[07:37:01][Step 2/2] 28 runs so far, 0 failures, over 47m15s
[07:37:06][Step 2/2] 28 runs so far, 0 failures, over 47m20s
[07:37:11][Step 2/2] 28 runs so far, 0 failures, over 47m25s
[07:37:16][Step 2/2] 28 runs so far, 0 failures, over 47m30s
[07:37:21][Step 2/2] 28 runs so far, 0 failures, over 47m35s
[07:37:26][Step 2/2] 28 runs so far, 0 failures, over 47m40s
[07:37:31][Step 2/2] 28 runs so far, 0 failures, over 47m45s
[07:37:36][Step 2/2] 28 runs so far, 0 failures, over 47m50s
[07:37:41][Step 2/2] 28 runs so far, 0 failures, over 47m55s
[07:37:46][Step 2/2] 28 runs so far, 0 failures, over 48m0s
[07:37:51][Step 2/2] 28 runs so far, 0 failures, over 48m5s
[07:37:56][Step 2/2] 28 runs so far, 0 failures, over 48m10s
[07:38:01][Step 2/2] 28 runs so far, 0 failures, over 48m15s
[07:38:06][Step 2/2] 28 runs so far, 0 failures, over 48m20s
[07:38:11][Step 2/2] 28 runs so far, 0 failures, over 48m25s
[07:38:16][Step 2/2] 28 runs so far, 0 failures, over 48m30s
[07:38:21][Step 2/2] 28 runs so far, 0 failures, over 48m35s
[07:38:26][Step 2/2] 29 runs so far, 0 failures, over 48m40s
[07:38:31][Step 2/2] 29 runs so far, 0 failures, over 48m45s
[07:38:36][Step 2/2] 29 runs so far, 0 failures, over 48m50s
[07:38:41][Step 2/2] 29 runs so far, 0 failures, over 48m55s
[07:38:46][Step 2/2] 29 runs so far, 0 failures, over 49m0s
[07:38:51][Step 2/2] 29 runs so far, 0 failures, over 49m5s
[07:38:56][Step 2/2] 29 runs so far, 0 failures, over 49m10s
[07:39:01][Step 2/2] 29 runs so far, 0 failures, over 49m15s
[07:39:06][Step 2/2] 29 runs so far, 0 failures, over 49m20s
[07:39:11][Step 2/2] 29 runs so far, 0 failures, over 49m25s
[07:39:16][Step 2/2] 29 runs so far, 0 failures, over 49m30s
[07:39:21][Step 2/2] 29 runs so far, 0 failures, over 49m35s
[07:39:26][Step 2/2] 29 runs so far, 0 failures, over 49m40s
[07:39:31][Step 2/2] 29 runs so far, 0 failures, over 49m45s
[07:39:36][Step 2/2] 30 runs so far, 0 failures, over 49m50s
[07:39:41][Step 2/2] 30 runs so far, 0 failures, over 49m55s
[07:39:46][Step 2/2] 31 runs so far, 0 failures, over 50m0s
[07:39:51][Step 2/2] 31 runs so far, 0 failures, over 50m5s
[07:39:56][Step 2/2] 31 runs so far, 0 failures, over 50m10s
[07:40:01][Step 2/2] 31 runs so far, 0 failures, over 50m15s
[07:40:06][Step 2/2] 32 runs so far, 0 failures, over 50m20s
[07:40:11][Step 2/2] 32 runs so far, 0 failures, over 50m25s
[07:40:16][Step 2/2] 32 runs so far, 0 failures, over 50m30s
[07:40:21][Step 2/2] 32 runs so far, 0 failures, over 50m35s
[07:40:26][Step 2/2] 32 runs so far, 0 failures, over 50m40s
[07:40:31][Step 2/2] 32 runs so far, 0 failures, over 50m45s
[07:40:36][Step 2/2] 32 runs so far, 0 failures, over 50m50s
[07:40:41][Step 2/2] 32 runs so far, 0 failures, over 50m55s
[07:40:46][Step 2/2] 32 runs so far, 0 failures, over 51m0s
[07:40:51][Step 2/2] 32 runs so far, 0 failures, over 51m5s
[07:40:56][Step 2/2] 32 runs so far, 0 failures, over 51m10s
[07:41:01][Step 2/2] 32 runs so far, 0 failures, over 51m15s
[07:41:06][Step 2/2] 32 runs so far, 0 failures, over 51m20s
[07:41:11][Step 2/2] 32 runs so far, 0 failures, over 51m25s
[07:41:16][Step 2/2] 32 runs so far, 0 failures, over 51m30s
[07:41:21][Step 2/2] 32 runs so far, 0 failures, over 51m35s
[07:41:26][Step 2/2] 32 runs so far, 0 failures, over 51m40s
[07:41:31][Step 2/2] 32 runs so far, 0 failures, over 51m45s
[07:41:36][Step 2/2] 32 runs so far, 0 failures, over 51m50s
[07:41:41][Step 2/2] 32 runs so far, 0 failures, over 51m55s
[07:41:46][Step 2/2] 32 runs so far, 0 failures, over 52m0s
[07:41:51][Step 2/2] 32 runs so far, 0 failures, over 52m5s
[07:41:56][Step 2/2] 32 runs so far, 0 failures, over 52m10s
[07:42:01][Step 2/2] 32 runs so far, 0 failures, over 52m15s
[07:42:06][Step 2/2] 32 runs so far, 0 failures, over 52m20s
[07:42:11][Step 2/2] 32 runs so far, 0 failures, over 52m25s
[07:42:16][Step 2/2] 32 runs so far, 0 failures, over 52m30s
[07:42:21][Step 2/2] 32 runs so far, 0 failures, over 52m35s
[07:42:26][Step 2/2] 32 runs so far, 0 failures, over 52m40s
[07:42:31][Step 2/2] 32 runs so far, 0 failures, over 52m45s
[07:42:36][Step 2/2] 32 runs so far, 0 failures, over 52m50s
[07:42:41][Step 2/2] 32 runs so far, 0 failures, over 52m55s
[07:42:46][Step 2/2] 32 runs so far, 0 failures, over 53m0s
[07:42:51][Step 2/2] 32 runs so far, 0 failures, over 53m5s
[07:42:56][Step 2/2] 32 runs so far, 0 failures, over 53m10s
[07:43:01][Step 2/2] 32 runs so far, 0 failures, over 53m15s
[07:43:06][Step 2/2] 32 runs so far, 0 failures, over 53m20s
[07:43:11][Step 2/2] 32 runs so far, 0 failures, over 53m25s
[07:43:16][Step 2/2] 32 runs so far, 0 failures, over 53m30s
[07:43:21][Step 2/2] 32 runs so far, 0 failures, over 53m35s
[07:43:26][Step 2/2] 32 runs so far, 0 failures, over 53m40s
[07:43:31][Step 2/2] 32 runs so far, 0 failures, over 53m45s
[07:43:36][Step 2/2] 32 runs so far, 0 failures, over 53m50s
[07:43:41][Step 2/2] 32 runs so far, 0 failures, over 53m55s
[07:43:46][Step 2/2] 32 runs so far, 0 failures, over 54m0s
[07:43:51][Step 2/2] 32 runs so far, 0 failures, over 54m5s
[07:43:56][Step 2/2] 32 runs so far, 0 failures, over 54m10s
[07:44:01][Step 2/2] 32 runs so far, 0 failures, over 54m15s
[07:44:06][Step 2/2] 32 runs so far, 0 failures, over 54m20s
[07:44:11][Step 2/2] 32 runs so far, 0 failures, over 54m25s
[07:44:16][Step 2/2] 32 runs so far, 0 failures, over 54m30s
[07:44:21][Step 2/2] 32 runs so far, 0 failures, over 54m35s
[07:44:26][Step 2/2] 32 runs so far, 0 failures, over 54m40s
[07:44:31][Step 2/2] 32 runs so far, 0 failures, over 54m45s
[07:44:36][Step 2/2] 32 runs so far, 0 failures, over 54m50s
[07:44:41][Step 2/2] 32 runs so far, 0 failures, over 54m55s
[07:44:46][Step 2/2] 32 runs so far, 0 failures, over 55m0s
[07:44:51][Step 2/2] 32 runs so far, 0 failures, over 55m5s
[07:44:56][Step 2/2] 32 runs so far, 0 failures, over 55m10s
[07:45:01][Step 2/2] 32 runs so far, 0 failures, over 55m15s
[07:45:06][Step 2/2] 32 runs so far, 0 failures, over 55m20s
[07:45:11][Step 2/2] 32 runs so far, 0 failures, over 55m25s
[07:45:16][Step 2/2] 32 runs so far, 0 failures, over 55m30s
[07:45:21][Step 2/2] 32 runs so far, 0 failures, over 55m35s
[07:45:26][Step 2/2] 32 runs so far, 0 failures, over 55m40s
[07:45:31][Step 2/2] 32 runs so far, 0 failures, over 55m45s
[07:45:36][Step 2/2] 33 runs so far, 0 failures, over 55m50s
[07:45:41][Step 2/2] 33 runs so far, 0 failures, over 55m55s
[07:45:46][Step 2/2] 33 runs so far, 0 failures, over 56m0s
[07:45:51][Step 2/2] 33 runs so far, 0 failures, over 56m5s
[07:45:56][Step 2/2] 35 runs so far, 0 failures, over 56m10s
[07:46:01][Step 2/2] 35 runs so far, 0 failures, over 56m15s
[07:46:06][Step 2/2] 35 runs so far, 0 failures, over 56m20s
[07:46:11][Step 2/2] 35 runs so far, 0 failures, over 56m25s
[07:46:16][Step 2/2] 35 runs so far, 0 failures, over 56m30s
[07:46:21][Step 2/2] 36 runs so far, 0 failures, over 56m35s
[07:46:26][Step 2/2] 36 runs so far, 0 failures, over 56m40s
[07:46:31][Step 2/2] 36 runs so far, 0 failures, over 56m45s
[07:46:36][Step 2/2] 36 runs so far, 0 failures, over 56m50s
[07:46:41][Step 2/2] 36 runs so far, 0 failures, over 56m55s
[07:46:46][Step 2/2] 36 runs so far, 0 failures, over 57m0s
[07:46:51][Step 2/2] 36 runs so far, 0 failures, over 57m5s
[07:46:56][Step 2/2] 36 runs so far, 0 failures, over 57m10s
[07:47:01][Step 2/2] 36 runs so far, 0 failures, over 57m15s
[07:47:06][Step 2/2] 36 runs so far, 0 failures, over 57m20s
[07:47:11][Step 2/2] 36 runs so far, 0 failures, over 57m25s
[07:47:16][Step 2/2] 36 runs so far, 0 failures, over 57m30s
[07:47:21][Step 2/2] 36 runs so far, 0 failures, over 57m35s
[07:47:26][Step 2/2] 36 runs so far, 0 failures, over 57m40s
[07:47:31][Step 2/2] 36 runs so far, 0 failures, over 57m45s
[07:47:36][Step 2/2] 36 runs so far, 0 failures, over 57m50s
[07:47:41][Step 2/2] 36 runs so far, 0 failures, over 57m55s
[07:47:46][Step 2/2] 36 runs so far, 0 failures, over 58m0s
[07:47:51][Step 2/2] 36 runs so far, 0 failures, over 58m5s
[07:47:56][Step 2/2] 36 runs so far, 0 failures, over 58m10s
[07:48:01][Step 2/2] 36 runs so far, 0 failures, over 58m15s
[07:48:06][Step 2/2] 36 runs so far, 0 failures, over 58m20s
[07:48:11][Step 2/2] 36 runs so far, 0 failures, over 58m25s
[07:48:16][Step 2/2] 36 runs so far, 0 failures, over 58m30s
[07:48:21][Step 2/2] 36 runs so far, 0 failures, over 58m35s
[07:48:26][Step 2/2] 36 runs so far, 0 failures, over 58m40s
[07:48:31][Step 2/2] 36 runs so far, 0 failures, over 58m45s
[07:48:36][Step 2/2] 36 runs so far, 0 failures, over 58m50s
[07:48:41][Step 2/2] 36 runs so far, 0 failures, over 58m55s
[07:48:46][Step 2/2] 36 runs so far, 0 failures, over 59m0s
[07:48:51][Step 2/2] 36 runs so far, 0 failures, over 59m5s
[07:48:56][Step 2/2] 36 runs so far, 0 failures, over 59m10s
[07:49:01][Step 2/2] 36 runs so far, 0 failures, over 59m15s
[07:49:06][Step 2/2] 36 runs so far, 0 failures, over 59m20s
[07:49:11][Step 2/2] 36 runs so far, 0 failures, over 59m25s
[07:49:16][Step 2/2] 36 runs so far, 0 failures, over 59m30s
[07:49:21][Step 2/2] 36 runs so far, 0 failures, over 59m35s
[07:49:26][Step 2/2] 36 runs so far, 0 failures, over 59m40s
[07:49:31][Step 2/2] 36 runs so far, 0 failures, over 59m45s
[07:49:36][Step 2/2] 36 runs so far, 0 failures, over 59m50s
[07:49:41][Step 2/2] 36 runs so far, 0 failures, over 59m55s
[07:49:46][Step 2/2] 36 runs so far, 0 failures, over 1h0m0s
[07:49:51][Step 2/2] 36 runs so far, 0 failures, over 1h0m5s
[07:49:56][Step 2/2] 36 runs so far, 0 failures, over 1h0m10s
[07:50:01][Step 2/2] 36 runs so far, 0 failures, over 1h0m15s
[07:50:06][Step 2/2] 36 runs so far, 0 failures, over 1h0m20s
[07:50:11][Step 2/2] 36 runs so far, 0 failures, over 1h0m25s
[07:50:16][Step 2/2] 36 runs so far, 0 failures, over 1h0m30s
[07:50:21][Step 2/2] 36 runs so far, 0 failures, over 1h0m35s
[07:50:26][Step 2/2] 36 runs so far, 0 failures, over 1h0m40s
[07:50:31][Step 2/2] 36 runs so far, 0 failures, over 1h0m45s
[07:50:36][Step 2/2] 36 runs so far, 0 failures, over 1h0m50s
[07:50:41][Step 2/2] 36 runs so far, 0 failures, over 1h0m55s
[07:50:46][Step 2/2] 36 runs so far, 0 failures, over 1h1m0s
[07:50:51][Step 2/2] 36 runs so far, 0 failures, over 1h1m5s
[07:50:56][Step 2/2] 36 runs so far, 0 failures, over 1h1m10s
[07:51:01][Step 2/2] 36 runs so far, 0 failures, over 1h1m15s
[07:51:06][Step 2/2] 36 runs so far, 0 failures, over 1h1m20s
[07:51:11][Step 2/2] 36 runs so far, 0 failures, over 1h1m25s
[07:51:16][Step 2/2] 36 runs so far, 0 failures, over 1h1m30s
[07:51:21][Step 2/2] 36 runs so far, 0 failures, over 1h1m35s
[07:51:26][Step 2/2] 36 runs so far, 0 failures, over 1h1m40s
[07:51:31][Step 2/2] 36 runs so far, 0 failures, over 1h1m45s
[07:51:36][Step 2/2] 36 runs so far, 0 failures, over 1h1m50s
[07:51:41][Step 2/2] 36 runs so far, 0 failures, over 1h1m55s
[07:51:46][Step 2/2] 37 runs so far, 0 failures, over 1h2m0s
[07:51:51][Step 2/2] 37 runs so far, 0 failures, over 1h2m5s
[07:51:56][Step 2/2] 37 runs so far, 0 failures, over 1h2m10s
[07:52:01][Step 2/2] 37 runs so far, 0 failures, over 1h2m15s
[07:52:06][Step 2/2] 37 runs so far, 0 failures, over 1h2m20s
[07:52:11][Step 2/2] 37 runs so far, 0 failures, over 1h2m25s
[07:52:16][Step 2/2] 38 runs so far, 0 failures, over 1h2m30s
[07:52:21][Step 2/2] 38 runs so far, 0 failures, over 1h2m35s
[07:52:26][Step 2/2] 39 runs so far, 0 failures, over 1h2m40s
[07:52:31][Step 2/2] 40 runs so far, 0 failures, over 1h2m45s
[07:52:36][Step 2/2] 40 runs so far, 0 failures, over 1h2m50s
[07:52:41][Step 2/2] 40 runs so far, 0 failures, over 1h2m55s
[07:52:46][Step 2/2] 40 runs so far, 0 failures, over 1h3m0s
[07:52:51][Step 2/2] 40 runs so far, 0 failures, over 1h3m5s
[07:52:56][Step 2/2] 40 runs so far, 0 failures, over 1h3m10s
[07:53:01][Step 2/2] 40 runs so far, 0 failures, over 1h3m15s
[07:53:06][Step 2/2] 40 runs so far, 0 failures, over 1h3m20s
[07:53:11][Step 2/2] 40 runs so far, 0 failures, over 1h3m25s
[07:53:16][Step 2/2] 40 runs so far, 0 failures, over 1h3m30s
[07:53:21][Step 2/2] 40 runs so far, 0 failures, over 1h3m35s
[07:53:26][Step 2/2] 40 runs so far, 0 failures, over 1h3m40s
[07:53:31][Step 2/2] 40 runs so far, 0 failures, over 1h3m45s
[07:53:36][Step 2/2] 40 runs so far, 0 failures, over 1h3m50s
[07:53:41][Step 2/2] 40 runs so far, 0 failures, over 1h3m55s
[07:53:46][Step 2/2] 40 runs so far, 0 failures, over 1h4m0s
[07:53:51][Step 2/2] 40 runs so far, 0 failures, over 1h4m5s
[07:53:56][Step 2/2] 40 runs so far, 0 failures, over 1h4m10s
[07:54:01][Step 2/2] 40 runs so far, 0 failures, over 1h4m15s
[07:54:06][Step 2/2] 40 runs so far, 0 failures, over 1h4m20s
[07:54:11][Step 2/2] 40 runs so far, 0 failures, over 1h4m25s
[07:54:16][Step 2/2] 40 runs so far, 0 failures, over 1h4m30s
[07:54:21][Step 2/2] 40 runs so far, 0 failures, over 1h4m35s
[07:54:26][Step 2/2] 40 runs so far, 0 failures, over 1h4m40s
[07:54:31][Step 2/2] 40 runs so far, 0 failures, over 1h4m45s
[07:54:36][Step 2/2] 40 runs so far, 0 failures, over 1h4m50s
[07:54:41][Step 2/2] 40 runs so far, 0 failures, over 1h4m55s
[07:54:46][Step 2/2] 40 runs so far, 0 failures, over 1h5m0s
[07:54:51][Step 2/2] 40 runs so far, 0 failures, over 1h5m5s
[07:54:56][Step 2/2] 40 runs so far, 0 failures, over 1h5m10s
[07:55:01][Step 2/2] 40 runs so far, 0 failures, over 1h5m15s
[07:55:06][Step 2/2] 40 runs so far, 0 failures, over 1h5m20s
[07:55:11][Step 2/2] 40 runs so far, 0 failures, over 1h5m25s
[07:55:16][Step 2/2] 40 runs so far, 0 failures, over 1h5m30s
[07:55:21][Step 2/2] 40 runs so far, 0 failures, over 1h5m35s
[07:55:26][Step 2/2] 40 runs so far, 0 failures, over 1h5m40s
[07:55:31][Step 2/2] 40 runs so far, 0 failures, over 1h5m45s
[07:55:36][Step 2/2] 40 runs so far, 0 failures, over 1h5m50s
[07:55:41][Step 2/2] 40 runs so far, 0 failures, over 1h5m55s
[07:55:46][Step 2/2] 40 runs so far, 0 failures, over 1h6m0s
[07:55:51][Step 2/2] 40 runs so far, 0 failures, over 1h6m5s
[07:55:56][Step 2/2] 40 runs so far, 0 failures, over 1h6m10s
[07:56:01][Step 2/2] 40 runs so far, 0 failures, over 1h6m15s
[07:56:06][Step 2/2] 40 runs so far, 0 failures, over 1h6m20s
[07:56:11][Step 2/2] 40 runs so far, 0 failures, over 1h6m25s
[07:56:16][Step 2/2] 40 runs so far, 0 failures, over 1h6m30s
[07:56:21][Step 2/2] 40 runs so far, 0 failures, over 1h6m35s
[07:56:26][Step 2/2] 40 runs so far, 0 failures, over 1h6m40s
[07:56:31][Step 2/2] 40 runs so far, 0 failures, over 1h6m45s
[07:56:36][Step 2/2] 40 runs so far, 0 failures, over 1h6m50s
[07:56:41][Step 2/2] 40 runs so far, 0 failures, over 1h6m55s
[07:56:46][Step 2/2] 40 runs so far, 0 failures, over 1h7m0s
[07:56:51][Step 2/2] 40 runs so far, 0 failures, over 1h7m5s
[07:56:56][Step 2/2] 40 runs so far, 0 failures, over 1h7m10s
[07:57:01][Step 2/2] 40 runs so far, 0 failures, over 1h7m15s
[07:57:06][Step 2/2] 40 runs so far, 0 failures, over 1h7m20s
[07:57:11][Step 2/2] 40 runs so far, 0 failures, over 1h7m25s
[07:57:16][Step 2/2] 40 runs so far, 0 failures, over 1h7m30s
[07:57:21][Step 2/2] 40 runs so far, 0 failures, over 1h7m35s
[07:57:26][Step 2/2] 40 runs so far, 0 failures, over 1h7m40s
[07:57:31][Step 2/2] 40 runs so far, 0 failures, over 1h7m45s
[07:57:36][Step 2/2] 40 runs so far, 0 failures, over 1h7m50s
[07:57:41][Step 2/2] 40 runs so far, 0 failures, over 1h7m55s
[07:57:46][Step 2/2] 40 runs so far, 0 failures, over 1h8m0s
[07:57:51][Step 2/2] 40 runs so far, 0 failures, over 1h8m5s
[07:57:56][Step 2/2] 40 runs so far, 0 failures, over 1h8m10s
[07:58:01][Step 2/2] 40 runs so far, 0 failures, over 1h8m15s
[07:58:06][Step 2/2] 41 runs so far, 0 failures, over 1h8m20s
[07:58:11][Step 2/2] 41 runs so far, 0 failures, over 1h8m25s
[07:58:16][Step 2/2] 41 runs so far, 0 failures, over 1h8m30s
[07:58:21][Step 2/2] 41 runs so far, 0 failures, over 1h8m35s
[07:58:26][Step 2/2] 42 runs so far, 0 failures, over 1h8m40s
[07:58:31][Step 2/2] 42 runs so far, 0 failures, over 1h8m45s
[07:58:36][Step 2/2] 42 runs so far, 0 failures, over 1h8m50s
[07:58:41][Step 2/2] 43 runs so far, 0 failures, over 1h8m55s
[07:58:46][Step 2/2] 44 runs so far, 0 failures, over 1h9m0s
[07:58:51][Step 2/2] 44 runs so far, 0 failures, over 1h9m5s
[07:58:56][Step 2/2] 44 runs so far, 0 failures, over 1h9m10s
[07:59:01][Step 2/2] 44 runs so far, 0 failures, over 1h9m15s
[07:59:06][Step 2/2] 44 runs so far, 0 failures, over 1h9m20s
[07:59:11][Step 2/2] 44 runs so far, 0 failures, over 1h9m25s
[07:59:16][Step 2/2] 44 runs so far, 0 failures, over 1h9m30s
[07:59:21][Step 2/2] 44 runs so far, 0 failures, over 1h9m35s
[07:59:26][Step 2/2] 44 runs so far, 0 failures, over 1h9m40s
[07:59:31][Step 2/2] 44 runs so far, 0 failures, over 1h9m45s
[07:59:36][Step 2/2] 44 runs so far, 0 failures, over 1h9m50s
[07:59:41][Step 2/2] 44 runs so far, 0 failures, over 1h9m55s
[07:59:46][Step 2/2] 44 runs so far, 0 failures, over 1h10m0s
[07:59:51][Step 2/2] 44 runs so far, 0 failures, over 1h10m5s
[07:59:56][Step 2/2] 44 runs so far, 0 failures, over 1h10m10s
[08:00:01][Step 2/2] 44 runs so far, 0 failures, over 1h10m15s
[08:00:06][Step 2/2] 44 runs so far, 0 failures, over 1h10m20s
[08:00:11][Step 2/2] 44 runs so far, 0 failures, over 1h10m25s
[08:00:16][Step 2/2] 44 runs so far, 0 failures, over 1h10m30s
[08:00:21][Step 2/2] 44 runs so far, 0 failures, over 1h10m35s
[08:00:26][Step 2/2] 44 runs so far, 0 failures, over 1h10m40s
[08:00:31][Step 2/2] 44 runs so far, 0 failures, over 1h10m45s
[08:00:36][Step 2/2] 44 runs so far, 0 failures, over 1h10m50s
[08:00:41][Step 2/2] 44 runs so far, 0 failures, over 1h10m55s
[08:00:46][Step 2/2] 44 runs so far, 0 failures, over 1h11m0s
[08:00:51][Step 2/2] 44 runs so far, 0 failures, over 1h11m5s
[08:00:56][Step 2/2] 44 runs so far, 0 failures, over 1h11m10s
[08:01:01][Step 2/2] 44 runs so far, 0 failures, over 1h11m15s
[08:01:06][Step 2/2] 44 runs so far, 0 failures, over 1h11m20s
[08:01:11][Step 2/2] 44 runs so far, 0 failures, over 1h11m25s
[08:01:16][Step 2/2] 44 runs so far, 0 failures, over 1h11m30s
[08:01:21][Step 2/2] 44 runs so far, 0 failures, over 1h11m35s
[08:01:26][Step 2/2] 44 runs so far, 0 failures, over 1h11m40s
[08:01:31][Step 2/2] 44 runs so far, 0 failures, over 1h11m45s
[08:01:36][Step 2/2] 44 runs so far, 0 failures, over 1h11m50s
[08:01:41][Step 2/2] 44 runs so far, 0 failures, over 1h11m55s
[08:01:46][Step 2/2] 44 runs so far, 0 failures, over 1h12m0s
[08:01:51][Step 2/2] 44 runs so far, 0 failures, over 1h12m5s
[08:01:56][Step 2/2] 44 runs so far, 0 failures, over 1h12m10s
[08:02:01][Step 2/2] 44 runs so far, 0 failures, over 1h12m15s
[08:02:06][Step 2/2] 44 runs so far, 0 failures, over 1h12m20s
[08:02:11][Step 2/2] 44 runs so far, 0 failures, over 1h12m25s
[08:02:16][Step 2/2] 44 runs so far, 0 failures, over 1h12m30s
[08:02:21][Step 2/2] 44 runs so far, 0 failures, over 1h12m35s
[08:02:26][Step 2/2] 44 runs so far, 0 failures, over 1h12m40s
[08:02:31][Step 2/2] 44 runs so far, 0 failures, over 1h12m45s
[08:02:36][Step 2/2] 44 runs so far, 0 failures, over 1h12m50s
[08:02:41][Step 2/2] 44 runs so far, 0 failures, over 1h12m55s
[08:02:46][Step 2/2] 44 runs so far, 0 failures, over 1h13m0s
[08:02:51][Step 2/2] 44 runs so far, 0 failures, over 1h13m5s
[08:02:56][Step 2/2] 44 runs so far, 0 failures, over 1h13m10s
[08:03:01][Step 2/2] 44 runs so far, 0 failures, over 1h13m15s
[08:03:06][Step 2/2] 44 runs so far, 0 failures, over 1h13m20s
[08:03:11][Step 2/2] 44 runs so far, 0 failures, over 1h13m25s
[08:03:16][Step 2/2] 44 runs so far, 0 failures, over 1h13m30s
[08:03:21][Step 2/2] 44 runs so far, 0 failures, over 1h13m35s
[08:03:26][Step 2/2] 44 runs so far, 0 failures, over 1h13m40s
[08:03:31][Step 2/2] 44 runs so far, 0 failures, over 1h13m45s
[08:03:36][Step 2/2] 44 runs so far, 0 failures, over 1h13m50s
[08:03:41][Step 2/2] 44 runs so far, 0 failures, over 1h13m55s
[08:03:46][Step 2/2] 44 runs so far, 0 failures, over 1h14m0s
[08:03:51][Step 2/2] 44 runs so far, 0 failures, over 1h14m5s
[08:03:56][Step 2/2] 44 runs so far, 0 failures, over 1h14m10s
[08:04:01][Step 2/2] 44 runs so far, 0 failures, over 1h14m15s
[08:04:06][Step 2/2] 44 runs so far, 0 failures, over 1h14m20s
[08:04:11][Step 2/2] 45 runs so far, 0 failures, over 1h14m25s
[08:04:16][Step 2/2] 45 runs so far, 0 failures, over 1h14m30s
[08:04:21][Step 2/2] 45 runs so far, 0 failures, over 1h14m35s
[08:04:26][Step 2/2] 45 runs so far, 0 failures, over 1h14m40s
[08:04:31][Step 2/2] 45 runs so far, 0 failures, over 1h14m45s
[08:04:36][Step 2/2] 46 runs so far, 0 failures, over 1h14m50s
[08:04:41][Step 2/2] 46 runs so far, 0 failures, over 1h14m55s
[08:04:46][Step 2/2] 46 runs so far, 0 failures, over 1h15m0s
[08:04:51][Step 2/2] 46 runs so far, 0 failures, over 1h15m5s
[08:04:56][Step 2/2] 46 runs so far, 0 failures, over 1h15m10s
[08:05:01][Step 2/2] 48 runs so far, 0 failures, over 1h15m15s
[08:05:06][Step 2/2] 48 runs so far, 0 failures, over 1h15m20s
[08:05:11][Step 2/2] 48 runs so far, 0 failures, over 1h15m25s
[08:05:16][Step 2/2] 48 runs so far, 0 failures, over 1h15m30s
[08:05:21][Step 2/2] 48 runs so far, 0 failures, over 1h15m35s
[08:05:26][Step 2/2] 48 runs so far, 0 failures, over 1h15m40s
[08:05:31][Step 2/2] 48 runs so far, 0 failures, over 1h15m45s
[08:05:36][Step 2/2] 48 runs so far, 0 failures, over 1h15m50s
[08:05:41][Step 2/2] 48 runs so far, 0 failures, over 1h15m55s
[08:05:46][Step 2/2] 48 runs so far, 0 failures, over 1h16m0s
[08:05:51][Step 2/2] 48 runs so far, 0 failures, over 1h16m5s
[08:05:56][Step 2/2] 48 runs so far, 0 failures, over 1h16m10s
[08:06:01][Step 2/2] 48 runs so far, 0 failures, over 1h16m15s
[08:06:06][Step 2/2] 48 runs so far, 0 failures, over 1h16m20s
[08:06:11][Step 2/2] 48 runs so far, 0 failures, over 1h16m25s
[08:06:16][Step 2/2] 48 runs so far, 0 failures, over 1h16m30s
[08:06:21][Step 2/2] 48 runs so far, 0 failures, over 1h16m35s
[08:06:26][Step 2/2] 48 runs so far, 0 failures, over 1h16m40s
[08:06:31][Step 2/2] 48 runs so far, 0 failures, over 1h16m45s
[08:06:36][Step 2/2] 48 runs so far, 0 failures, over 1h16m50s
[08:06:41][Step 2/2] 48 runs so far, 0 failures, over 1h16m55s
[08:06:46][Step 2/2] 48 runs so far, 0 failures, over 1h17m0s
[08:06:51][Step 2/2] 48 runs so far, 0 failures, over 1h17m5s
[08:06:56][Step 2/2] 48 runs so far, 0 failures, over 1h17m10s
[08:07:01][Step 2/2] 48 runs so far, 0 failures, over 1h17m15s
[08:07:06][Step 2/2] 48 runs so far, 0 failures, over 1h17m20s
[08:07:11][Step 2/2] 48 runs so far, 0 failures, over 1h17m25s
[08:07:16][Step 2/2] 48 runs so far, 0 failures, over 1h17m30s
[08:07:21][Step 2/2] 48 runs so far, 0 failures, over 1h17m35s
[08:07:26][Step 2/2] 48 runs so far, 0 failures, over 1h17m40s
[08:07:31][Step 2/2] 48 runs so far, 0 failures, over 1h17m45s
[08:07:36][Step 2/2] 48 runs so far, 0 failures, over 1h17m50s
[08:07:41][Step 2/2] 48 runs so far, 0 failures, over 1h17m55s
[08:07:46][Step 2/2] 48 runs so far, 0 failures, over 1h18m0s
[08:07:51][Step 2/2] 48 runs so far, 0 failures, over 1h18m5s
[08:07:56][Step 2/2] 48 runs so far, 0 failures, over 1h18m10s
[08:08:01][Step 2/2] 48 runs so far, 0 failures, over 1h18m15s
[08:08:06][Step 2/2] 48 runs so far, 0 failures, over 1h18m20s
[08:08:11][Step 2/2] 48 runs so far, 0 failures, over 1h18m25s
[08:08:16][Step 2/2] 48 runs so far, 0 failures, over 1h18m30s
[08:08:21][Step 2/2] 48 runs so far, 0 failures, over 1h18m35s
[08:08:26][Step 2/2] 48 runs so far, 0 failures, over 1h18m40s
[08:08:31][Step 2/2] 48 runs so far, 0 failures, over 1h18m45s
[08:08:36][Step 2/2] 48 runs so far, 0 failures, over 1h18m50s
[08:08:41][Step 2/2] 48 runs so far, 0 failures, over 1h18m55s
[08:08:46][Step 2/2] 48 runs so far, 0 failures, over 1h19m0s
[08:08:51][Step 2/2] 48 runs so far, 0 failures, over 1h19m5s
[08:08:56][Step 2/2] 48 runs so far, 0 failures, over 1h19m10s
[08:09:01][Step 2/2] 48 runs so far, 0 failures, over 1h19m15s
[08:09:06][Step 2/2] 48 runs so far, 0 failures, over 1h19m20s
[08:09:11][Step 2/2] 48 runs so far, 0 failures, over 1h19m25s
[08:09:16][Step 2/2] 48 runs so far, 0 failures, over 1h19m30s
[08:09:21][Step 2/2] 48 runs so far, 0 failures, over 1h19m35s
[08:09:26][Step 2/2] 48 runs so far, 0 failures, over 1h19m40s
[08:09:31][Step 2/2] 48 runs so far, 0 failures, over 1h19m45s
[08:09:36][Step 2/2] 48 runs so far, 0 failures, over 1h19m50s
[08:09:41][Step 2/2] 48 runs so far, 0 failures, over 1h19m55s
[08:09:46][Step 2/2] 48 runs so far, 0 failures, over 1h20m0s
[08:09:51][Step 2/2] 48 runs so far, 0 failures, over 1h20m5s
[08:09:56][Step 2/2] 48 runs so far, 0 failures, over 1h20m10s
[08:10:01][Step 2/2] 48 runs so far, 0 failures, over 1h20m15s
[08:10:06][Step 2/2] 48 runs so far, 0 failures, over 1h20m20s
[08:10:11][Step 2/2] 48 runs so far, 0 failures, over 1h20m25s
[08:10:16][Step 2/2] 48 runs so far, 0 failures, over 1h20m30s
[08:10:21][Step 2/2] 48 runs so far, 0 failures, over 1h20m35s
[08:10:26][Step 2/2] 49 runs so far, 0 failures, over 1h20m40s
[08:10:31][Step 2/2] 49 runs so far, 0 failures, over 1h20m45s
[08:10:36][Step 2/2] 49 runs so far, 0 failures, over 1h20m50s
[08:10:41][Step 2/2] 49 runs so far, 0 failures, over 1h20m55s
[08:10:46][Step 2/2] 49 runs so far, 0 failures, over 1h21m0s
[08:10:51][Step 2/2] 50 runs so far, 0 failures, over 1h21m5s
[08:10:56][Step 2/2] 50 runs so far, 0 failures, over 1h21m10s
[08:11:01][Step 2/2] 50 runs so far, 0 failures, over 1h21m15s
[08:11:06][Step 2/2] 51 runs so far, 0 failures, over 1h21m20s
[08:11:11][Step 2/2] 51 runs so far, 0 failures, over 1h21m25s
[08:11:16][Step 2/2] 51 runs so far, 0 failures, over 1h21m30s
[08:11:21][Step 2/2] 52 runs so far, 0 failures, over 1h21m35s
[08:11:26][Step 2/2] 52 runs so far, 0 failures, over 1h21m40s
[08:11:31][Step 2/2] 52 runs so far, 0 failures, over 1h21m45s
[08:11:36][Step 2/2] 52 runs so far, 0 failures, over 1h21m50s
[08:11:41][Step 2/2] 52 runs so far, 0 failures, over 1h21m55s
[08:11:46][Step 2/2] 52 runs so far, 0 failures, over 1h22m0s
[08:11:51][Step 2/2] 52 runs so far, 0 failures, over 1h22m5s
[08:11:56][Step 2/2] 52 runs so far, 0 failures, over 1h22m10s
[08:12:01][Step 2/2] 52 runs so far, 0 failures, over 1h22m15s
[08:12:06][Step 2/2] 52 runs so far, 0 failures, over 1h22m20s
[08:12:11][Step 2/2] 52 runs so far, 0 failures, over 1h22m25s
[08:12:16][Step 2/2] 52 runs so far, 0 failures, over 1h22m30s
[08:12:21][Step 2/2] 52 runs so far, 0 failures, over 1h22m35s
[08:12:26][Step 2/2] 52 runs so far, 0 failures, over 1h22m40s
[08:12:31][Step 2/2] 52 runs so far, 0 failures, over 1h22m45s
[08:12:36][Step 2/2] 52 runs so far, 0 failures, over 1h22m50s
[08:12:41][Step 2/2] 52 runs so far, 0 failures, over 1h22m55s
[08:12:46][Step 2/2] 52 runs so far, 0 failures, over 1h23m0s
[08:12:51][Step 2/2] 52 runs so far, 0 failures, over 1h23m5s
[08:12:56][Step 2/2] 52 runs so far, 0 failures, over 1h23m10s
[08:13:01][Step 2/2] 52 runs so far, 0 failures, over 1h23m15s
[08:13:06][Step 2/2] 52 runs so far, 0 failures, over 1h23m20s
[08:13:11][Step 2/2] 52 runs so far, 0 failures, over 1h23m25s
[08:13:16][Step 2/2] 52 runs so far, 0 failures, over 1h23m30s
[08:13:21][Step 2/2] 52 runs so far, 0 failures, over 1h23m35s
[08:13:26][Step 2/2] 52 runs so far, 0 failures, over 1h23m40s
[08:13:31][Step 2/2] 52 runs so far, 0 failures, over 1h23m45s
[08:13:36][Step 2/2] 52 runs so far, 0 failures, over 1h23m50s
[08:13:41][Step 2/2] 52 runs so far, 0 failures, over 1h23m55s
[08:13:46][Step 2/2] 52 runs so far, 0 failures, over 1h24m0s
[08:13:51][Step 2/2] 52 runs so far, 0 failures, over 1h24m5s
[08:13:56][Step 2/2] 52 runs so far, 0 failures, over 1h24m10s
[08:14:01][Step 2/2] 52 runs so far, 0 failures, over 1h24m15s
[08:14:06][Step 2/2] 52 runs so far, 0 failures, over 1h24m20s
[08:14:11][Step 2/2] 52 runs so far, 0 failures, over 1h24m25s
[08:14:16][Step 2/2] 52 runs so far, 0 failures, over 1h24m30s
[08:14:21][Step 2/2] 52 runs so far, 0 failures, over 1h24m35s
[08:14:26][Step 2/2] 52 runs so far, 0 failures, over 1h24m40s
[08:14:31][Step 2/2] 52 runs so far, 0 failures, over 1h24m45s
[08:14:36][Step 2/2] 52 runs so far, 0 failures, over 1h24m50s
[08:14:41][Step 2/2] 52 runs so far, 0 failures, over 1h24m55s
[08:14:46][Step 2/2] 52 runs so far, 0 failures, over 1h25m0s
[08:14:51][Step 2/2] 52 runs so far, 0 failures, over 1h25m5s
[08:14:56][Step 2/2] 52 runs so far, 0 failures, over 1h25m10s
[08:15:01][Step 2/2] 52 runs so far, 0 failures, over 1h25m15s
[08:15:06][Step 2/2] 52 runs so far, 0 failures, over 1h25m20s
[08:15:11][Step 2/2] 52 runs so far, 0 failures, over 1h25m25s
[08:15:16][Step 2/2] 52 runs so far, 0 failures, over 1h25m30s
[08:15:21][Step 2/2] 52 runs so far, 0 failures, over 1h25m35s
[08:15:26][Step 2/2] 52 runs so far, 0 failures, over 1h25m40s
[08:15:31][Step 2/2] 52 runs so far, 0 failures, over 1h25m45s
[08:15:36][Step 2/2] 52 runs so far, 0 failures, over 1h25m50s
[08:15:41][Step 2/2] 52 runs so far, 0 failures, over 1h25m55s
[08:15:46][Step 2/2] 52 runs so far, 0 failures, over 1h26m0s
[08:15:51][Step 2/2] 52 runs so far, 0 failures, over 1h26m5s
[08:15:56][Step 2/2] 52 runs so far, 0 failures, over 1h26m10s
[08:16:01][Step 2/2] 52 runs so far, 0 failures, over 1h26m15s
[08:16:06][Step 2/2] 52 runs so far, 0 failures, over 1h26m20s
[08:16:11][Step 2/2] 52 runs so far, 0 failures, over 1h26m25s
[08:16:16][Step 2/2] 52 runs so far, 0 failures, over 1h26m30s
[08:16:21][Step 2/2] 52 runs so far, 0 failures, over 1h26m35s
[08:16:26][Step 2/2] 52 runs so far, 0 failures, over 1h26m40s
[08:16:31][Step 2/2] 52 runs so far, 0 failures, over 1h26m45s
[08:16:36][Step 2/2] 52 runs so far, 0 failures, over 1h26m50s
[08:16:41][Step 2/2] 52 runs so far, 0 failures, over 1h26m55s
[08:16:46][Step 2/2] 52 runs so far, 0 failures, over 1h27m0s
[08:16:51][Step 2/2] 52 runs so far, 0 failures, over 1h27m5s
[08:16:56][Step 2/2] 53 runs so far, 0 failures, over 1h27m10s
[08:17:01][Step 2/2] 53 runs so far, 0 failures, over 1h27m15s
[08:17:06][Step 2/2] 54 runs so far, 0 failures, over 1h27m20s
[08:17:11][Step 2/2] 54 runs so far, 0 failures, over 1h27m25s
[08:17:16][Step 2/2] 54 runs so far, 0 failures, over 1h27m30s
[08:17:21][Step 2/2] 54 runs so far, 0 failures, over 1h27m35s
[08:17:26][Step 2/2] 54 runs so far, 0 failures, over 1h27m40s
[08:17:31][Step 2/2] 55 runs so far, 0 failures, over 1h27m45s
[08:17:36][Step 2/2] 56 runs so far, 0 failures, over 1h27m50s
[08:17:41][Step 2/2] 56 runs so far, 0 failures, over 1h27m55s
[08:17:46][Step 2/2] 56 runs so far, 0 failures, over 1h28m0s
[08:17:51][Step 2/2] 56 runs so far, 0 failures, over 1h28m5s
[08:17:56][Step 2/2] 56 runs so far, 0 failures, over 1h28m10s
[08:18:01][Step 2/2] 56 runs so far, 0 failures, over 1h28m15s
[08:18:06][Step 2/2] 56 runs so far, 0 failures, over 1h28m20s
[08:18:11][Step 2/2] 56 runs so far, 0 failures, over 1h28m25s
[08:18:16][Step 2/2] 56 runs so far, 0 failures, over 1h28m30s
[08:18:21][Step 2/2] 56 runs so far, 0 failures, over 1h28m35s
[08:18:26][Step 2/2] 56 runs so far, 0 failures, over 1h28m40s
[08:18:31][Step 2/2] 56 runs so far, 0 failures, over 1h28m45s
[08:18:36][Step 2/2] 56 runs so far, 0 failures, over 1h28m50s
[08:18:41][Step 2/2] 56 runs so far, 0 failures, over 1h28m55s
[08:18:46][Step 2/2] 56 runs so far, 0 failures, over 1h29m0s
[08:18:51][Step 2/2] 56 runs so far, 0 failures, over 1h29m5s
[08:18:56][Step 2/2] 56 runs so far, 0 failures, over 1h29m10s
[08:19:01][Step 2/2] 56 runs so far, 0 failures, over 1h29m15s
[08:19:06][Step 2/2] 56 runs so far, 0 failures, over 1h29m20s
[08:19:11][Step 2/2] 56 runs so far, 0 failures, over 1h29m25s
[08:19:16][Step 2/2] 56 runs so far, 0 failures, over 1h29m30s
[08:19:21][Step 2/2] 56 runs so far, 0 failures, over 1h29m35s
[08:19:26][Step 2/2] 56 runs so far, 0 failures, over 1h29m40s
[08:19:31][Step 2/2] 56 runs so far, 0 failures, over 1h29m45s
[08:19:36][Step 2/2] 56 runs so far, 0 failures, over 1h29m50s
[08:19:41][Step 2/2] 56 runs so far, 0 failures, over 1h29m55s
[08:19:46][Step 2/2] 56 runs so far, 0 failures, over 1h30m0s
[08:19:51][Step 2/2] 56 runs so far, 0 failures, over 1h30m5s
[08:19:56][Step 2/2] 56 runs so far, 0 failures, over 1h30m10s
[08:20:01][Step 2/2] 56 runs so far, 0 failures, over 1h30m15s
[08:20:06][Step 2/2] 56 runs so far, 0 failures, over 1h30m20s
[08:20:11][Step 2/2] 56 runs so far, 0 failures, over 1h30m25s
[08:20:16][Step 2/2] 56 runs so far, 0 failures, over 1h30m30s
[08:20:21][Step 2/2] 56 runs so far, 0 failures, over 1h30m35s
[08:20:26][Step 2/2] 56 runs so far, 0 failures, over 1h30m40s
[08:20:31][Step 2/2] 56 runs so far, 0 failures, over 1h30m45s
[08:20:36][Step 2/2] 56 runs so far, 0 failures, over 1h30m50s
[08:20:41][Step 2/2] 56 runs so far, 0 failures, over 1h30m55s
[08:20:46][Step 2/2] 56 runs so far, 0 failures, over 1h31m0s
[08:20:51][Step 2/2] 56 runs so far, 0 failures, over 1h31m5s
[08:20:56][Step 2/2] 56 runs so far, 0 failures, over 1h31m10s
[08:21:01][Step 2/2] 56 runs so far, 0 failures, over 1h31m15s
[08:21:06][Step 2/2] 56 runs so far, 0 failures, over 1h31m20s
[08:21:11][Step 2/2] 56 runs so far, 0 failures, over 1h31m25s
[08:21:16][Step 2/2] 56 runs so far, 0 failures, over 1h31m30s
[08:21:21][Step 2/2] 56 runs so far, 0 failures, over 1h31m35s
[08:21:26][Step 2/2] 56 runs so far, 0 failures, over 1h31m40s
[08:21:31][Step 2/2] 56 runs so far, 0 failures, over 1h31m45s
[08:21:36][Step 2/2] 56 runs so far, 0 failures, over 1h31m50s
[08:21:41][Step 2/2] 56 runs so far, 0 failures, over 1h31m55s
[08:21:46][Step 2/2] 56 runs so far, 0 failures, over 1h32m0s
[08:21:51][Step 2/2] 56 runs so far, 0 failures, over 1h32m5s
[08:21:56][Step 2/2] 56 runs so far, 0 failures, over 1h32m10s
[08:22:01][Step 2/2] 56 runs so far, 0 failures, over 1h32m15s
[08:22:06][Step 2/2] 56 runs so far, 0 failures, over 1h32m20s
[08:22:11][Step 2/2] 56 runs so far, 0 failures, over 1h32m25s
[08:22:16][Step 2/2] 56 runs so far, 0 failures, over 1h32m30s
[08:22:21][Step 2/2] 56 runs so far, 0 failures, over 1h32m35s
[08:22:26][Step 2/2] 56 runs so far, 0 failures, over 1h32m40s
[08:22:31][Step 2/2] 56 runs so far, 0 failures, over 1h32m45s
[08:22:36][Step 2/2] 56 runs so far, 0 failures, over 1h32m50s
[08:22:41][Step 2/2] 56 runs so far, 0 failures, over 1h32m55s
[08:22:46][Step 2/2] 56 runs so far, 0 failures, over 1h33m0s
[08:22:51][Step 2/2] 56 runs so far, 0 failures, over 1h33m5s
[08:22:56][Step 2/2] 56 runs so far, 0 failures, over 1h33m10s
[08:23:01][Step 2/2] 56 runs so far, 0 failures, over 1h33m15s
[08:23:06][Step 2/2] 57 runs so far, 0 failures, over 1h33m20s
[08:23:11][Step 2/2] 57 runs so far, 0 failures, over 1h33m25s
[08:23:16][Step 2/2] 57 runs so far, 0 failures, over 1h33m30s
[08:23:21][Step 2/2] 57 runs so far, 0 failures, over 1h33m35s
[08:23:26][Step 2/2] 57 runs so far, 0 failures, over 1h33m40s
[08:23:31][Step 2/2] 58 runs so far, 0 failures, over 1h33m45s
[08:23:36][Step 2/2] 58 runs so far, 0 failures, over 1h33m50s
[08:23:41][Step 2/2] 58 runs so far, 0 failures, over 1h33m55s
[08:23:46][Step 2/2] 59 runs so far, 0 failures, over 1h34m0s
[08:23:51][Step 2/2] 59 runs so far, 0 failures, over 1h34m5s
[08:23:56][Step 2/2] 60 runs so far, 0 failures, over 1h34m10s
[08:24:01][Step 2/2] 60 runs so far, 0 failures, over 1h34m15s
[08:24:06][Step 2/2] 60 runs so far, 0 failures, over 1h34m20s
[08:24:11][Step 2/2] 60 runs so far, 0 failures, over 1h34m25s
[08:24:16][Step 2/2] 60 runs so far, 0 failures, over 1h34m30s
[08:24:21][Step 2/2] 60 runs so far, 0 failures, over 1h34m35s
[08:24:26][Step 2/2] 60 runs so far, 0 failures, over 1h34m40s
[08:24:31][Step 2/2] 60 runs so far, 0 failures, over 1h34m45s
[08:24:36][Step 2/2] 60 runs so far, 0 failures, over 1h34m50s
[08:24:41][Step 2/2] 60 runs so far, 0 failures, over 1h34m55s
[08:24:46][Step 2/2] 60 runs so far, 0 failures, over 1h35m0s
[08:24:51][Step 2/2] 60 runs so far, 0 failures, over 1h35m5s
[08:24:56][Step 2/2] 60 runs so far, 0 failures, over 1h35m10s
[08:25:01][Step 2/2] 60 runs so far, 0 failures, over 1h35m15s
[08:25:06][Step 2/2] 60 runs so far, 0 failures, over 1h35m20s
[08:25:11][Step 2/2] 60 runs so far, 0 failures, over 1h35m25s
[08:25:16][Step 2/2] 60 runs so far, 0 failures, over 1h35m30s
[08:25:21][Step 2/2] 60 runs so far, 0 failures, over 1h35m35s
[08:25:26][Step 2/2] 60 runs so far, 0 failures, over 1h35m40s
[08:25:31][Step 2/2] 60 runs so far, 0 failures, over 1h35m45s
[08:25:36][Step 2/2] 60 runs so far, 0 failures, over 1h35m50s
[08:25:41][Step 2/2] 60 runs so far, 0 failures, over 1h35m55s
[08:25:46][Step 2/2] 60 runs so far, 0 failures, over 1h36m0s
[08:25:51][Step 2/2] 60 runs so far, 0 failures, over 1h36m5s
[08:25:56][Step 2/2] 60 runs so far, 0 failures, over 1h36m10s
[08:26:01][Step 2/2] 60 runs so far, 0 failures, over 1h36m15s
[08:26:06][Step 2/2] 60 runs so far, 0 failures, over 1h36m20s
[08:26:11][Step 2/2] 60 runs so far, 0 failures, over 1h36m25s
[08:26:16][Step 2/2] 60 runs so far, 0 failures, over 1h36m30s
[08:26:21][Step 2/2] 60 runs so far, 0 failures, over 1h36m35s
[08:26:26][Step 2/2] 60 runs so far, 0 failures, over 1h36m40s
[08:26:31][Step 2/2] 60 runs so far, 0 failures, over 1h36m45s
[08:26:36][Step 2/2] 60 runs so far, 0 failures, over 1h36m50s
[08:26:41][Step 2/2] 60 runs so far, 0 failures, over 1h36m55s
[08:26:46][Step 2/2] 60 runs so far, 0 failures, over 1h37m0s
[08:26:51][Step 2/2] 60 runs so far, 0 failures, over 1h37m5s
[08:26:56][Step 2/2] 60 runs so far, 0 failures, over 1h37m10s
[08:27:01][Step 2/2] 60 runs so far, 0 failures, over 1h37m15s
[08:27:06][Step 2/2] 60 runs so far, 0 failures, over 1h37m20s
[08:27:11][Step 2/2] 60 runs so far, 0 failures, over 1h37m25s
[08:27:16][Step 2/2] 60 runs so far, 0 failures, over 1h37m30s
[08:27:21][Step 2/2] 60 runs so far, 0 failures, over 1h37m35s
[08:27:26][Step 2/2] 60 runs so far, 0 failures, over 1h37m40s
[08:27:31][Step 2/2] 60 runs so far, 0 failures, over 1h37m45s
[08:27:36][Step 2/2] 60 runs so far, 0 failures, over 1h37m50s
[08:27:41][Step 2/2] 60 runs so far, 0 failures, over 1h37m55s
[08:27:46][Step 2/2] 60 runs so far, 0 failures, over 1h38m0s
[08:27:51][Step 2/2] 60 runs so far, 0 failures, over 1h38m5s
[08:27:56][Step 2/2] 60 runs so far, 0 failures, over 1h38m10s
[08:28:01][Step 2/2] 60 runs so far, 0 failures, over 1h38m15s
[08:28:06][Step 2/2] 60 runs so far, 0 failures, over 1h38m20s
[08:28:11][Step 2/2] 60 runs so far, 0 failures, over 1h38m25s
[08:28:16][Step 2/2] 60 runs so far, 0 failures, over 1h38m30s
[08:28:21][Step 2/2] 60 runs so far, 0 failures, over 1h38m35s
[08:28:26][Step 2/2] 60 runs so far, 0 failures, over 1h38m40s
[08:28:31][Step 2/2] 60 runs so far, 0 failures, over 1h38m45s
[08:28:36][Step 2/2] 60 runs so far, 0 failures, over 1h38m50s
[08:28:41][Step 2/2] 60 runs so far, 0 failures, over 1h38m55s
[08:28:46][Step 2/2] 60 runs so far, 0 failures, over 1h39m0s
[08:28:51][Step 2/2] 60 runs so far, 0 failures, over 1h39m5s
[08:28:56][Step 2/2] 60 runs so far, 0 failures, over 1h39m10s
[08:29:01][Step 2/2] 60 runs so far, 0 failures, over 1h39m15s
[08:29:06][Step 2/2] 60 runs so far, 0 failures, over 1h39m20s
[08:29:11][Step 2/2] 60 runs so far, 0 failures, over 1h39m25s
[08:29:16][Step 2/2] 60 runs so far, 0 failures, over 1h39m30s
[08:29:21][Step 2/2] 60 runs so far, 0 failures, over 1h39m35s
[08:29:26][Step 2/2] 60 runs so far, 0 failures, over 1h39m40s
[08:29:31][Step 2/2] 60 runs so far, 0 failures, over 1h39m45s
[08:29:36][Step 2/2] 60 runs so far, 0 failures, over 1h39m50s
[08:29:41][Step 2/2] 60 runs so far, 0 failures, over 1h39m55s
[08:29:46][Step 2/2] 60 runs so far, 0 failures, over 1h40m0s
[08:29:51][Step 2/2] 60 runs so far, 0 failures, over 1h40m5s
[08:29:56][Step 2/2] 61 runs so far, 0 failures, over 1h40m10s
[08:30:01][Step 2/2] 62 runs so far, 0 failures, over 1h40m15s
[08:30:06][Step 2/2] 63 runs so far, 0 failures, over 1h40m20s
[08:30:11][Step 2/2] 63 runs so far, 0 failures, over 1h40m25s
[08:30:16][Step 2/2] 64 runs so far, 0 failures, over 1h40m30s
[08:30:21][Step 2/2] 64 runs so far, 0 failures, over 1h40m35s
[08:30:26][Step 2/2] 64 runs so far, 0 failures, over 1h40m40s
[08:30:31][Step 2/2] 64 runs so far, 0 failures, over 1h40m45s
[08:30:36][Step 2/2] 64 runs so far, 0 failures, over 1h40m50s
[08:30:41][Step 2/2] 64 runs so far, 0 failures, over 1h40m55s
[08:30:46][Step 2/2] 64 runs so far, 0 failures, over 1h41m0s
[08:30:51][Step 2/2] 64 runs so far, 0 failures, over 1h41m5s
[08:30:56][Step 2/2] 64 runs so far, 0 failures, over 1h41m10s
[08:31:01][Step 2/2] 64 runs so far, 0 failures, over 1h41m15s
[08:31:06][Step 2/2] 64 runs so far, 0 failures, over 1h41m20s
[08:31:11][Step 2/2] 64 runs so far, 0 failures, over 1h41m25s
[08:31:16][Step 2/2] 64 runs so far, 0 failures, over 1h41m30s
[08:31:21][Step 2/2] 64 runs so far, 0 failures, over 1h41m35s
[08:31:26][Step 2/2] 64 runs so far, 0 failures, over 1h41m40s
[08:31:31][Step 2/2] 64 runs so far, 0 failures, over 1h41m45s
[08:31:36][Step 2/2] 64 runs so far, 0 failures, over 1h41m50s
[08:31:41][Step 2/2] 64 runs so far, 0 failures, over 1h41m55s
[08:31:46][Step 2/2] 64 runs so far, 0 failures, over 1h42m0s
[08:31:51][Step 2/2] 64 runs so far, 0 failures, over 1h42m5s
[08:31:56][Step 2/2] 64 runs so far, 0 failures, over 1h42m10s
[08:32:01][Step 2/2] 64 runs so far, 0 failures, over 1h42m15s
[08:32:06][Step 2/2] 64 runs so far, 0 failures, over 1h42m20s
[08:32:11][Step 2/2] 64 runs so far, 0 failures, over 1h42m25s
[08:32:16][Step 2/2] 64 runs so far, 0 failures, over 1h42m30s
[08:32:21][Step 2/2] 64 runs so far, 0 failures, over 1h42m35s
[08:32:26][Step 2/2] 64 runs so far, 0 failures, over 1h42m40s
[08:32:31][Step 2/2] 64 runs so far, 0 failures, over 1h42m45s
[08:32:36][Step 2/2] 64 runs so far, 0 failures, over 1h42m50s
[08:32:41][Step 2/2] 64 runs so far, 0 failures, over 1h42m55s
[08:32:46][Step 2/2] 64 runs so far, 0 failures, over 1h43m0s
[08:32:51][Step 2/2] 64 runs so far, 0 failures, over 1h43m5s
[08:32:56][Step 2/2] 64 runs so far, 0 failures, over 1h43m10s
[08:33:01][Step 2/2] 64 runs so far, 0 failures, over 1h43m15s
[08:33:06][Step 2/2] 64 runs so far, 0 failures, over 1h43m20s
[08:33:11][Step 2/2] 64 runs so far, 0 failures, over 1h43m25s
[08:33:16][Step 2/2] 64 runs so far, 0 failures, over 1h43m30s
[08:33:21][Step 2/2] 64 runs so far, 0 failures, over 1h43m35s
[08:33:26][Step 2/2] 64 runs so far, 0 failures, over 1h43m40s
[08:33:31][Step 2/2] 64 runs so far, 0 failures, over 1h43m45s
[08:33:36][Step 2/2] 64 runs so far, 0 failures, over 1h43m50s
[08:33:41][Step 2/2] 64 runs so far, 0 failures, over 1h43m55s
[08:33:46][Step 2/2] 64 runs so far, 0 failures, over 1h44m0s
[08:33:51][Step 2/2] 64 runs so far, 0 failures, over 1h44m5s
[08:33:56][Step 2/2] 64 runs so far, 0 failures, over 1h44m10s
[08:34:01][Step 2/2] 64 runs so far, 0 failures, over 1h44m15s
[08:34:06][Step 2/2] 64 runs so far, 0 failures, over 1h44m20s
[08:34:11][Step 2/2] 64 runs so far, 0 failures, over 1h44m25s
[08:34:16][Step 2/2] 64 runs so far, 0 failures, over 1h44m30s
[08:34:21][Step 2/2] 64 runs so far, 0 failures, over 1h44m35s
[08:34:26][Step 2/2] 64 runs so far, 0 failures, over 1h44m40s
[08:34:31][Step 2/2] 64 runs so far, 0 failures, over 1h44m45s
[08:34:36][Step 2/2] 64 runs so far, 0 failures, over 1h44m50s
[08:34:41][Step 2/2] 64 runs so far, 0 failures, over 1h44m55s
[08:34:46][Step 2/2] 64 runs so far, 0 failures, over 1h45m0s
[08:34:51][Step 2/2] 64 runs so far, 0 failures, over 1h45m5s
[08:34:56][Step 2/2] 64 runs so far, 0 failures, over 1h45m10s
[08:35:01][Step 2/2] 64 runs so far, 0 failures, over 1h45m15s
[08:35:06][Step 2/2] 64 runs so far, 0 failures, over 1h45m20s
[08:35:11][Step 2/2] 64 runs so far, 0 failures, over 1h45m25s
[08:35:16][Step 2/2] 64 runs so far, 0 failures, over 1h45m30s
[08:35:21][Step 2/2] 64 runs so far, 0 failures, over 1h45m35s
[08:35:26][Step 2/2] 64 runs so far, 0 failures, over 1h45m40s
[08:35:31][Step 2/2] 64 runs so far, 0 failures, over 1h45m45s
[08:35:36][Step 2/2] 64 runs so far, 0 failures, over 1h45m50s
[08:35:41][Step 2/2] 64 runs so far, 0 failures, over 1h45m55s
[08:35:46][Step 2/2] 64 runs so far, 0 failures, over 1h46m0s
[08:35:51][Step 2/2] 64 runs so far, 0 failures, over 1h46m5s
[08:35:56][Step 2/2] 64 runs so far, 0 failures, over 1h46m10s
[08:36:01][Step 2/2] 64 runs so far, 0 failures, over 1h46m15s
[08:36:06][Step 2/2] 64 runs so far, 0 failures, over 1h46m20s
[08:36:11][Step 2/2] 65 runs so far, 0 failures, over 1h46m25s
[08:36:16][Step 2/2] 65 runs so far, 0 failures, over 1h46m30s
[08:36:21][Step 2/2] 65 runs so far, 0 failures, over 1h46m35s
[08:36:26][Step 2/2] 66 runs so far, 0 failures, over 1h46m40s
[08:36:31][Step 2/2] 67 runs so far, 0 failures, over 1h46m45s
[08:36:36][Step 2/2] 68 runs so far, 0 failures, over 1h46m50s
[08:36:41][Step 2/2] 68 runs so far, 0 failures, over 1h46m55s
[08:36:46][Step 2/2] 68 runs so far, 0 failures, over 1h47m0s
[08:36:51][Step 2/2] 68 runs so far, 0 failures, over 1h47m5s
[08:36:56][Step 2/2] 68 runs so far, 0 failures, over 1h47m10s
[08:37:01][Step 2/2] 68 runs so far, 0 failures, over 1h47m15s
[08:37:06][Step 2/2] 68 runs so far, 0 failures, over 1h47m20s
[08:37:11][Step 2/2] 68 runs so far, 0 failures, over 1h47m25s
[08:37:16][Step 2/2] 68 runs so far, 0 failures, over 1h47m30s
[08:37:21][Step 2/2] 68 runs so far, 0 failures, over 1h47m35s
[08:37:26][Step 2/2] 68 runs so far, 0 failures, over 1h47m40s
[08:37:31][Step 2/2] 68 runs so far, 0 failures, over 1h47m45s
[08:37:36][Step 2/2] 68 runs so far, 0 failures, over 1h47m50s
[08:37:41][Step 2/2] 68 runs so far, 0 failures, over 1h47m55s
[08:37:46][Step 2/2] 68 runs so far, 0 failures, over 1h48m0s
[08:37:51][Step 2/2] 68 runs so far, 0 failures, over 1h48m5s
[08:37:56][Step 2/2] 68 runs so far, 0 failures, over 1h48m10s
[08:38:01][Step 2/2] 68 runs so far, 0 failures, over 1h48m15s
[08:38:06][Step 2/2] 68 runs so far, 0 failures, over 1h48m20s
[08:38:11][Step 2/2] 68 runs so far, 0 failures, over 1h48m25s
[08:38:16][Step 2/2] 68 runs so far, 0 failures, over 1h48m30s
[08:38:21][Step 2/2] 68 runs so far, 0 failures, over 1h48m35s
[08:38:26][Step 2/2] 68 runs so far, 0 failures, over 1h48m40s
[08:38:31][Step 2/2] 68 runs so far, 0 failures, over 1h48m45s
[08:38:36][Step 2/2] 68 runs so far, 0 failures, over 1h48m50s
[08:38:41][Step 2/2] 68 runs so far, 0 failures, over 1h48m55s
[08:38:46][Step 2/2] 68 runs so far, 0 failures, over 1h49m0s
[08:38:51][Step 2/2] 68 runs so far, 0 failures, over 1h49m5s
[08:38:56][Step 2/2] 68 runs so far, 0 failures, over 1h49m10s
[08:39:01][Step 2/2] 68 runs so far, 0 failures, over 1h49m15s
[08:39:06][Step 2/2] 68 runs so far, 0 failures, over 1h49m20s
[08:39:11][Step 2/2] 68 runs so far, 0 failures, over 1h49m25s
[08:39:16][Step 2/2] 68 runs so far, 0 failures, over 1h49m30s
[08:39:21][Step 2/2] 68 runs so far, 0 failures, over 1h49m35s
[08:39:26][Step 2/2] 68 runs so far, 0 failures, over 1h49m40s
[08:39:31][Step 2/2] 68 runs so far, 0 failures, over 1h49m45s
[08:39:36][Step 2/2] 68 runs so far, 0 failures, over 1h49m50s
[08:39:41][Step 2/2] 68 runs so far, 0 failures, over 1h49m55s
[08:39:46][Step 2/2] 68 runs so far, 0 failures, over 1h50m0s
[08:39:51][Step 2/2] 68 runs so far, 0 failures, over 1h50m5s
[08:39:56][Step 2/2] 68 runs so far, 0 failures, over 1h50m10s
[08:40:01][Step 2/2] 68 runs so far, 0 failures, over 1h50m15s
[08:40:06][Step 2/2] 68 runs so far, 0 failures, over 1h50m20s
[08:40:11][Step 2/2] 68 runs so far, 0 failures, over 1h50m25s
[08:40:16][Step 2/2] 68 runs so far, 0 failures, over 1h50m30s
[08:40:21][Step 2/2] 68 runs so far, 0 failures, over 1h50m35s
[08:40:26][Step 2/2] 68 runs so far, 0 failures, over 1h50m40s
[08:40:31][Step 2/2] 68 runs so far, 0 failures, over 1h50m45s
[08:40:36][Step 2/2] 68 runs so far, 0 failures, over 1h50m50s
[08:40:41][Step 2/2] 68 runs so far, 0 failures, over 1h50m55s
[08:40:46][Step 2/2] 68 runs so far, 0 failures, over 1h51m0s
[08:40:51][Step 2/2] 68 runs so far, 0 failures, over 1h51m5s
[08:40:56][Step 2/2] 68 runs so far, 0 failures, over 1h51m10s
[08:41:01][Step 2/2] 68 runs so far, 0 failures, over 1h51m15s
[08:41:06][Step 2/2] 68 runs so far, 0 failures, over 1h51m20s
[08:41:11][Step 2/2] 68 runs so far, 0 failures, over 1h51m25s
[08:41:16][Step 2/2] 68 runs so far, 0 failures, over 1h51m30s
[08:41:21][Step 2/2] 68 runs so far, 0 failures, over 1h51m35s
[08:41:26][Step 2/2] 68 runs so far, 0 failures, over 1h51m40s
[08:41:31][Step 2/2] 68 runs so far, 0 failures, over 1h51m45s
[08:41:36][Step 2/2] 68 runs so far, 0 failures, over 1h51m50s
[08:41:41][Step 2/2] 68 runs so far, 0 failures, over 1h51m55s
[08:41:46][Step 2/2] 68 runs so far, 0 failures, over 1h52m0s
[08:41:51][Step 2/2] 68 runs so far, 0 failures, over 1h52m5s
[08:41:56][Step 2/2] 68 runs so far, 0 failures, over 1h52m10s
[08:42:01][Step 2/2] 68 runs so far, 0 failures, over 1h52m15s
[08:42:06][Step 2/2] 68 runs so far, 0 failures, over 1h52m20s
[08:42:11][Step 2/2] 68 runs so far, 0 failures, over 1h52m25s
[08:42:16][Step 2/2] 68 runs so far, 0 failures, over 1h52m30s
[08:42:21][Step 2/2] 69 runs so far, 0 failures, over 1h52m35s
[08:42:26][Step 2/2] 69 runs so far, 0 failures, over 1h52m40s
[08:42:31][Step 2/2] 69 runs so far, 0 failures, over 1h52m45s
[08:42:36][Step 2/2] 69 runs so far, 0 failures, over 1h52m50s
[08:42:41][Step 2/2] 69 runs so far, 0 failures, over 1h52m55s
[08:42:46][Step 2/2] 69 runs so far, 0 failures, over 1h53m0s
[08:42:51][Step 2/2] 69 runs so far, 0 failures, over 1h53m5s
[08:42:53][Step 2/2]
[08:42:53][Step 2/2] I181018 08:36:29.010236 1 rand.go:75 Random seed: -5024729656378207025
[08:42:53][Step 2/2] === RUN TestUpdateRangeAddressing
[08:42:53][Step 2/2] I181018 08:36:29.053693 148 util/protoutil/randnullability.go:94 inserting null for (storagepb.ReplicaState).GCThreshold: false
[08:42:53][Step 2/2] I181018 08:36:29.053795 148 util/protoutil/randnullability.go:94 inserting null for (storagepb.ReplicaState).TxnSpanGCThreshold: false
[08:42:53][Step 2/2] --- PASS: TestUpdateRangeAddressing (0.18s)
[08:42:53][Step 2/2] === RUN TestUpdateRangeAddressingSplitMeta1
[08:42:53][Step 2/2] --- PASS: TestUpdateRangeAddressingSplitMeta1 (0.01s)
[08:42:53][Step 2/2] === RUN TestOnlyValidAndNotFull
[08:42:53][Step 2/2] === RUN TestOnlyValidAndNotFull/0,0
[08:42:53][Step 2/2] === RUN TestOnlyValidAndNotFull/1,0
[08:42:53][Step 2/2] === RUN TestOnlyValidAndNotFull/0,1
[08:42:53][Step 2/2] === RUN TestOnlyValidAndNotFull/1,1
[08:42:53][Step 2/2] === RUN TestOnlyValidAndNotFull/2,0
[08:42:53][Step 2/2] === RUN TestOnlyValidAndNotFull/2,1
[08:42:53][Step 2/2] === RUN TestOnlyValidAndNotFull/2,2
[08:42:53][Step 2/2] === RUN TestOnlyValidAndNotFull/1,2
[08:42:53][Step 2/2] === RUN TestOnlyValidAndNotFull/0,2
[08:42:53][Step 2/2] --- PASS: TestOnlyValidAndNotFull (0.02s)
[08:42:53][Step 2/2] --- PASS: TestOnlyValidAndNotFull/0,0 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestOnlyValidAndNotFull/1,0 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestOnlyValidAndNotFull/0,1 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestOnlyValidAndNotFull/1,1 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestOnlyValidAndNotFull/2,0 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestOnlyValidAndNotFull/2,1 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestOnlyValidAndNotFull/2,2 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestOnlyValidAndNotFull/1,2 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestOnlyValidAndNotFull/0,2 (0.00s)
[08:42:53][Step 2/2] === RUN TestSelectGoodPanic
[08:42:53][Step 2/2] --- PASS: TestSelectGoodPanic (0.01s)
[08:42:53][Step 2/2] === RUN TestCandidateSelection
[08:42:53][Step 2/2] === RUN TestCandidateSelection/best-0:0
[08:42:53][Step 2/2] === RUN TestCandidateSelection/worst-0:0
[08:42:53][Step 2/2] === RUN TestCandidateSelection/good-0:0
[08:42:53][Step 2/2] === RUN TestCandidateSelection/bad-0:0
[08:42:53][Step 2/2] === RUN TestCandidateSelection/best-0:0,0:1
[08:42:53][Step 2/2] === RUN TestCandidateSelection/worst-0:0,0:1
[08:42:53][Step 2/2] === RUN TestCandidateSelection/good-0:0,0:1
[08:42:53][Step 2/2] === RUN TestCandidateSelection/bad-0:0,0:1
[08:42:53][Step 2/2] === RUN TestCandidateSelection/best-0:0,0:1,0:2
[08:42:53][Step 2/2] === RUN TestCandidateSelection/worst-0:0,0:1,0:2
[08:42:53][Step 2/2] === RUN TestCandidateSelection/good-0:0,0:1,0:2
[08:42:53][Step 2/2] === RUN TestCandidateSelection/bad-0:0,0:1,0:2
[08:42:53][Step 2/2] === RUN TestCandidateSelection/best-1:0,0:1
[08:42:53][Step 2/2] === RUN TestCandidateSelection/worst-1:0,0:1
[08:42:53][Step 2/2] === RUN TestCandidateSelection/good-1:0,0:1
[08:42:53][Step 2/2] === RUN TestCandidateSelection/bad-1:0,0:1
[08:42:53][Step 2/2] === RUN TestCandidateSelection/best-1:0,0:1,0:2
[08:42:53][Step 2/2] === RUN TestCandidateSelection/worst-1:0,0:1,0:2
[08:42:53][Step 2/2] === RUN TestCandidateSelection/good-1:0,0:1,0:2
[08:42:53][Step 2/2] === RUN TestCandidateSelection/bad-1:0,0:1,0:2
[08:42:53][Step 2/2] === RUN TestCandidateSelection/best-1:0,1:1,0:2
[08:42:53][Step 2/2] === RUN TestCandidateSelection/worst-1:0,1:1,0:2
[08:42:53][Step 2/2] === RUN TestCandidateSelection/good-1:0,1:1,0:2
[08:42:53][Step 2/2] === RUN TestCandidateSelection/bad-1:0,1:1,0:2
[08:42:53][Step 2/2] === RUN TestCandidateSelection/best-1:0,1:1,0:2,0:3
[08:42:53][Step 2/2] === RUN TestCandidateSelection/worst-1:0,1:1,0:2,0:3
[08:42:53][Step 2/2] === RUN TestCandidateSelection/good-1:0,1:1,0:2,0:3
[08:42:53][Step 2/2] === RUN TestCandidateSelection/bad-1:0,1:1,0:2,0:3
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection (0.02s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/best-0:0 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/worst-0:0 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/good-0:0 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/bad-0:0 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/best-0:0,0:1 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/worst-0:0,0:1 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/good-0:0,0:1 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/bad-0:0,0:1 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/best-0:0,0:1,0:2 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/worst-0:0,0:1,0:2 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/good-0:0,0:1,0:2 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/bad-0:0,0:1,0:2 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/best-1:0,0:1 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/worst-1:0,0:1 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/good-1:0,0:1 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/bad-1:0,0:1 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/best-1:0,0:1,0:2 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/worst-1:0,0:1,0:2 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/good-1:0,0:1,0:2 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/bad-1:0,0:1,0:2 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/best-1:0,1:1,0:2 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/worst-1:0,1:1,0:2 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/good-1:0,1:1,0:2 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/bad-1:0,1:1,0:2 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/best-1:0,1:1,0:2,0:3 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/worst-1:0,1:1,0:2,0:3 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/good-1:0,1:1,0:2,0:3 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestCandidateSelection/bad-1:0,1:1,0:2,0:3 (0.00s)
[08:42:53][Step 2/2] === RUN TestBetterThan
[08:42:53][Step 2/2] --- PASS: TestBetterThan (0.01s)
[08:42:53][Step 2/2] === RUN TestBestRebalanceTarget
[08:42:53][Step 2/2] --- PASS: TestBestRebalanceTarget (0.01s)
[08:42:53][Step 2/2] === RUN TestStoreHasReplica
[08:42:53][Step 2/2] --- PASS: TestStoreHasReplica (0.00s)
[08:42:53][Step 2/2] === RUN TestConstraintsCheck
[08:42:53][Step 2/2] === RUN TestConstraintsCheck/required_constraint
[08:42:53][Step 2/2] === RUN TestConstraintsCheck/required_locality_constraints
[08:42:53][Step 2/2] === RUN TestConstraintsCheck/prohibited_constraints
[08:42:53][Step 2/2] === RUN TestConstraintsCheck/prohibited_locality_constraints
[08:42:53][Step 2/2] === RUN TestConstraintsCheck/positive_constraints_are_ignored
[08:42:53][Step 2/2] === RUN TestConstraintsCheck/positive_locality_constraints_are_ignored
[08:42:53][Step 2/2] === RUN TestConstraintsCheck/NumReplicas_doesn't_affect_constraint_checking
[08:42:53][Step 2/2] === RUN TestConstraintsCheck/multiple_per-replica_constraints_are_respected
[08:42:53][Step 2/2] --- PASS: TestConstraintsCheck (0.01s)
[08:42:53][Step 2/2] --- PASS: TestConstraintsCheck/required_constraint (0.00s)
[08:42:53][Step 2/2] --- PASS: TestConstraintsCheck/required_locality_constraints (0.00s)
[08:42:53][Step 2/2] --- PASS: TestConstraintsCheck/prohibited_constraints (0.00s)
[08:42:53][Step 2/2] --- PASS: TestConstraintsCheck/prohibited_locality_constraints (0.00s)
[08:42:53][Step 2/2] --- PASS: TestConstraintsCheck/positive_constraints_are_ignored (0.00s)
[08:42:53][Step 2/2] --- PASS: TestConstraintsCheck/positive_locality_constraints_are_ignored (0.00s)
[08:42:53][Step 2/2] --- PASS: TestConstraintsCheck/NumReplicas_doesn't_affect_constraint_checking (0.00s)
[08:42:53][Step 2/2] --- PASS: TestConstraintsCheck/multiple_per-replica_constraints_are_respected (0.00s)
[08:42:53][Step 2/2] === RUN TestAllocateConstraintsCheck
[08:42:53][Step 2/2] === RUN TestAllocateConstraintsCheck/prohibited_constraint
[08:42:53][Step 2/2] === RUN TestAllocateConstraintsCheck/required_constraint
[08:42:53][Step 2/2] === RUN TestAllocateConstraintsCheck/required_constraint_with_NumReplicas
[08:42:53][Step 2/2] === RUN TestAllocateConstraintsCheck/multiple_required_constraints_with_NumReplicas
[08:42:53][Step 2/2] === RUN TestAllocateConstraintsCheck/multiple_required_constraints_with_NumReplicas_and_existing_replicas
[08:42:53][Step 2/2] === RUN TestAllocateConstraintsCheck/multiple_required_constraints_with_NumReplicas_and_not_enough_existing_replicas
[08:42:53][Step 2/2] === RUN TestAllocateConstraintsCheck/multiple_required_constraints_with_NumReplicas_and_sum(NumReplicas)_<_zone.NumReplicas
[08:42:53][Step 2/2] === RUN TestAllocateConstraintsCheck/multiple_required_constraints_with_sum(NumReplicas)_<_zone.NumReplicas_and_not_enough_existing_replicas
[08:42:53][Step 2/2] --- PASS: TestAllocateConstraintsCheck (0.01s)
[08:42:53][Step 2/2] --- PASS: TestAllocateConstraintsCheck/prohibited_constraint (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocateConstraintsCheck/required_constraint (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocateConstraintsCheck/required_constraint_with_NumReplicas (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocateConstraintsCheck/multiple_required_constraints_with_NumReplicas (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocateConstraintsCheck/multiple_required_constraints_with_NumReplicas_and_existing_replicas (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocateConstraintsCheck/multiple_required_constraints_with_NumReplicas_and_not_enough_existing_replicas (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocateConstraintsCheck/multiple_required_constraints_with_NumReplicas_and_sum(NumReplicas)_<_zone.NumReplicas (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocateConstraintsCheck/multiple_required_constraints_with_sum(NumReplicas)_<_zone.NumReplicas_and_not_enough_existing_replicas (0.00s)
[08:42:53][Step 2/2] === RUN TestRemoveConstraintsCheck
[08:42:53][Step 2/2] === RUN TestRemoveConstraintsCheck/prohibited_constraint
[08:42:53][Step 2/2] === RUN TestRemoveConstraintsCheck/required_constraint
[08:42:53][Step 2/2] === RUN TestRemoveConstraintsCheck/required_constraint_with_NumReplicas
[08:42:53][Step 2/2] === RUN TestRemoveConstraintsCheck/multiple_required_constraints_with_NumReplicas
[08:42:53][Step 2/2] === RUN TestRemoveConstraintsCheck/required_constraint_with_NumReplicas_and_sum(NumReplicas)_<_zone.NumReplicas
[08:42:53][Step 2/2] --- PASS: TestRemoveConstraintsCheck (0.01s)
[08:42:53][Step 2/2] --- PASS: TestRemoveConstraintsCheck/prohibited_constraint (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRemoveConstraintsCheck/required_constraint (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRemoveConstraintsCheck/required_constraint_with_NumReplicas (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRemoveConstraintsCheck/multiple_required_constraints_with_NumReplicas (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRemoveConstraintsCheck/required_constraint_with_NumReplicas_and_sum(NumReplicas)_<_zone.NumReplicas (0.00s)
[08:42:53][Step 2/2] === RUN TestShouldRebalanceDiversity
[08:42:53][Step 2/2] --- PASS: TestShouldRebalanceDiversity (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocateDiversityScore
[08:42:53][Step 2/2] === RUN TestAllocateDiversityScore/no_existing_replicas
[08:42:53][Step 2/2] === RUN TestAllocateDiversityScore/one_existing_replicas
[08:42:53][Step 2/2] === RUN TestAllocateDiversityScore/two_existing_replicas
[08:42:53][Step 2/2] --- PASS: TestAllocateDiversityScore (0.01s)
[08:42:53][Step 2/2] --- PASS: TestAllocateDiversityScore/no_existing_replicas (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocateDiversityScore/one_existing_replicas (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocateDiversityScore/two_existing_replicas (0.00s)
[08:42:53][Step 2/2] === RUN TestRebalanceToDiversityScore
[08:42:53][Step 2/2] === RUN TestRebalanceToDiversityScore/no_existing_replicas
[08:42:53][Step 2/2] === RUN TestRebalanceToDiversityScore/one_existing_replica
[08:42:53][Step 2/2] === RUN TestRebalanceToDiversityScore/two_existing_replicas
[08:42:53][Step 2/2] === RUN TestRebalanceToDiversityScore/three_existing_replicas
[08:42:53][Step 2/2] === RUN TestRebalanceToDiversityScore/three_existing_replicas_with_duplicate
[08:42:53][Step 2/2] === RUN TestRebalanceToDiversityScore/four_existing_replicas
[08:42:53][Step 2/2] === RUN TestRebalanceToDiversityScore/four_existing_replicas_with_duplicate
[08:42:53][Step 2/2] --- PASS: TestRebalanceToDiversityScore (0.01s)
[08:42:53][Step 2/2] --- PASS: TestRebalanceToDiversityScore/no_existing_replicas (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRebalanceToDiversityScore/one_existing_replica (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRebalanceToDiversityScore/two_existing_replicas (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRebalanceToDiversityScore/three_existing_replicas (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRebalanceToDiversityScore/three_existing_replicas_with_duplicate (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRebalanceToDiversityScore/four_existing_replicas (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRebalanceToDiversityScore/four_existing_replicas_with_duplicate (0.00s)
[08:42:53][Step 2/2] === RUN TestRemovalDiversityScore
[08:42:53][Step 2/2] === RUN TestRemovalDiversityScore/four_existing_replicas
[08:42:53][Step 2/2] === RUN TestRemovalDiversityScore/four_existing_replicas_with_duplicate
[08:42:53][Step 2/2] === RUN TestRemovalDiversityScore/three_existing_replicas_-_excluding_testStoreUSa15
[08:42:53][Step 2/2] === RUN TestRemovalDiversityScore/three_existing_replicas_-_excluding_testStoreUSa1
[08:42:53][Step 2/2] === RUN TestRemovalDiversityScore/three_existing_replicas_-_excluding_testStoreUSb
[08:42:53][Step 2/2] === RUN TestRemovalDiversityScore/three_existing_replicas_-_excluding_testStoreEurope
[08:42:53][Step 2/2] --- PASS: TestRemovalDiversityScore (0.01s)
[08:42:53][Step 2/2] --- PASS: TestRemovalDiversityScore/four_existing_replicas (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRemovalDiversityScore/four_existing_replicas_with_duplicate (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRemovalDiversityScore/three_existing_replicas_-_excluding_testStoreUSa15 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRemovalDiversityScore/three_existing_replicas_-_excluding_testStoreUSa1 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRemovalDiversityScore/three_existing_replicas_-_excluding_testStoreUSb (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRemovalDiversityScore/three_existing_replicas_-_excluding_testStoreEurope (0.00s)
[08:42:53][Step 2/2] === RUN TestDiversityScoreEquivalence
[08:42:53][Step 2/2] --- PASS: TestDiversityScoreEquivalence (0.01s)
[08:42:53][Step 2/2] === RUN TestBalanceScore
[08:42:53][Step 2/2] --- PASS: TestBalanceScore (0.00s)
[08:42:53][Step 2/2] === RUN TestRebalanceConvergesOnMean
[08:42:53][Step 2/2] --- PASS: TestRebalanceConvergesOnMean (0.01s)
[08:42:53][Step 2/2] === RUN TestMaxCapacity
[08:42:53][Step 2/2] --- PASS: TestMaxCapacity (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorSimpleRetrieval
[08:42:53][Step 2/2] --- PASS: TestAllocatorSimpleRetrieval (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorCorruptReplica
[08:42:53][Step 2/2] --- PASS: TestAllocatorCorruptReplica (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorNoAvailableDisks
[08:42:53][Step 2/2] --- PASS: TestAllocatorNoAvailableDisks (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorTwoDatacenters
[08:42:53][Step 2/2] --- PASS: TestAllocatorTwoDatacenters (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorExistingReplica
[08:42:53][Step 2/2] --- PASS: TestAllocatorExistingReplica (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorMultipleStoresPerNode
[08:42:53][Step 2/2] I181018 08:36:29.419350 299 storage/allocator_scorer.go:597 nodeHasReplica(n1, [(n1,s1):?])=true
[08:42:53][Step 2/2] I181018 08:36:29.420038 299 storage/allocator_scorer.go:597 nodeHasReplica(n1, [(n1,s2):? (n2,s3):?])=true
[08:42:53][Step 2/2] I181018 08:36:29.420145 299 storage/allocator_scorer.go:597 nodeHasReplica(n2, [(n1,s2):? (n2,s3):?])=true
[08:42:53][Step 2/2] I181018 08:36:29.420717 299 storage/allocator_scorer.go:597 nodeHasReplica(n3, [(n1,s2):? (n3,s6):?])=true
[08:42:53][Step 2/2] I181018 08:36:29.420785 299 storage/allocator_scorer.go:597 nodeHasReplica(n1, [(n1,s2):? (n3,s6):?])=true
[08:42:53][Step 2/2] I181018 08:36:29.421153 299 storage/allocator_scorer.go:597 nodeHasReplica(n1, [(n1,s1):? (n2,s3):? (n3,s5):?])=true
[08:42:53][Step 2/2] I181018 08:36:29.421221 299 storage/allocator_scorer.go:597 nodeHasReplica(n3, [(n1,s1):? (n2,s3):? (n3,s5):?])=true
[08:42:53][Step 2/2] I181018 08:36:29.421281 299 storage/allocator_scorer.go:597 nodeHasReplica(n2, [(n1,s1):? (n2,s3):? (n3,s5):?])=true
[08:42:53][Step 2/2] I181018 08:36:29.421482 299 storage/allocator_scorer.go:597 nodeHasReplica(n1, [(n1,s2):? (n2,s4):? (n3,s6):?])=true
[08:42:53][Step 2/2] I181018 08:36:29.421537 299 storage/allocator_scorer.go:597 nodeHasReplica(n2, [(n1,s2):? (n2,s4):? (n3,s6):?])=true
[08:42:53][Step 2/2] I181018 08:36:29.421609 299 storage/allocator_scorer.go:597 nodeHasReplica(n3, [(n1,s2):? (n2,s4):? (n3,s6):?])=true
[08:42:53][Step 2/2] --- PASS: TestAllocatorMultipleStoresPerNode (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorRebalance
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalance (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceTarget
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceTarget (0.02s)
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceDeadNodes
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceDeadNodes/#00
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceDeadNodes/#01
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceDeadNodes/#02
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceDeadNodes/#03
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceDeadNodes/#04
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceDeadNodes/#05
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceDeadNodes/#06
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceDeadNodes (0.01s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceDeadNodes/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceDeadNodes/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceDeadNodes/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceDeadNodes/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceDeadNodes/#04 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceDeadNodes/#05 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceDeadNodes/#06 (0.00s)
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceThrashing
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceThrashing/balanced
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceThrashing/empty-node
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceThrashing/within-threshold
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceThrashing/5-stores-mean-100-one-above
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceThrashing/5-stores-mean-1000-one-above
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceThrashing/5-stores-mean-10000-one-above
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceThrashing/5-stores-mean-1000-one-underused
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceThrashing/10-stores-mean-1000-one-underused
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceThrashing (0.05s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceThrashing/balanced (0.01s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceThrashing/empty-node (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceThrashing/within-threshold (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceThrashing/5-stores-mean-100-one-above (0.01s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceThrashing/5-stores-mean-1000-one-above (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceThrashing/5-stores-mean-10000-one-above (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceThrashing/5-stores-mean-1000-one-underused (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceThrashing/10-stores-mean-1000-one-underused (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceByCount
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceByCount (0.02s)
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTarget
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTarget/#00
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTarget/#01
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTarget/#02
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTarget/#03
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTarget/#04
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTarget/#05
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTarget/#06
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTarget (0.01s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTarget/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTarget/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTarget/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTarget/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTarget/#04 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTarget/#05 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTarget/#06 (0.00s)
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetDraining
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetDraining/#00
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetDraining/#01
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetDraining/#02
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetDraining/#03
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetDraining/#04
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetDraining/#05
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetDraining/#06
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetDraining (0.01s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetDraining/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetDraining/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetDraining/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetDraining/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetDraining/#04 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetDraining/#05 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetDraining/#06 (0.00s)
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceDifferentLocalitySizes
[08:42:53][Step 2/2] I181018 08:36:29.580722 372 storage/allocator_test.go:1518 case #0
[08:42:53][Step 2/2] I181018 08:36:29.581291 372 storage/allocator_test.go:1518 case #1
[08:42:53][Step 2/2] I181018 08:36:29.581782 372 storage/allocator_test.go:1518 case #2
[08:42:53][Step 2/2] I181018 08:36:29.582238 372 storage/allocator_test.go:1518 case #3
[08:42:53][Step 2/2] I181018 08:36:29.582723 372 storage/allocator_test.go:1518 case #4
[08:42:53][Step 2/2] I181018 08:36:29.583309 372 storage/allocator_test.go:1518 case #5
[08:42:53][Step 2/2] I181018 08:36:29.583846 372 storage/allocator_test.go:1518 case #6
[08:42:53][Step 2/2] I181018 08:36:29.584385 372 storage/allocator_test.go:1518 case #7
[08:42:53][Step 2/2] I181018 08:36:29.584931 372 storage/allocator_test.go:1518 case #8
[08:42:53][Step 2/2] I181018 08:36:29.585420 372 storage/allocator_test.go:1518 case #9
[08:42:53][Step 2/2] I181018 08:36:29.585726 372 storage/allocator_test.go:1518 case #10
[08:42:53][Step 2/2] I181018 08:36:29.586015 372 storage/allocator_test.go:1518 case #11
[08:42:53][Step 2/2] I181018 08:36:29.586639 372 storage/allocator_test.go:1518 case #12
[08:42:53][Step 2/2] I181018 08:36:29.587271 372 storage/allocator_test.go:1518 case #13
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceDifferentLocalitySizes (0.03s)
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetMultiStore
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetMultiStore/#00
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetMultiStore/#01
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetMultiStore/#02
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetMultiStore/#03
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetMultiStore/#04
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetMultiStore/#05
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetMultiStore (0.01s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetMultiStore/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetMultiStore/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetMultiStore/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetMultiStore/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetMultiStore/#04 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetMultiStore/#05 (0.00s)
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLease
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLease/#00
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLease/#01
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLease/#02
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLease/#03
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLease/#04
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLease/#05
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLease/#06
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLease/#07
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLease/#08
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLease/#09
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLease/#10
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLease/#11
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLease/#12
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLease (0.02s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLease/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLease/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLease/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLease/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLease/#04 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLease/#05 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLease/#06 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLease/#07 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLease/#08 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLease/#09 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLease/#10 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLease/#11 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLease/#12 (0.00s)
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLeaseDraining
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLeaseDraining/#00
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLeaseDraining/#01
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLeaseDraining/#02
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLeaseDraining/#03
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLeaseDraining/#04
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLeaseDraining/#05
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLeaseDraining/#06
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLeaseDraining/#07
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLeaseDraining/#08
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLeaseDraining/#09
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLeaseDraining/#10
[08:42:53][Step 2/2] === RUN TestAllocatorShouldTransferLeaseDraining/#11
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLeaseDraining (0.01s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLeaseDraining/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLeaseDraining/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLeaseDraining/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLeaseDraining/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLeaseDraining/#04 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLeaseDraining/#05 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLeaseDraining/#06 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLeaseDraining/#07 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLeaseDraining/#08 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLeaseDraining/#09 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLeaseDraining/#10 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorShouldTransferLeaseDraining/#11 (0.00s)
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#00
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#01
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#02
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#03
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#04
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#05
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#06
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#07
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#08
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#09
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#10
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#11
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#12
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#13
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#14
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#15
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#16
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#17
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#18
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#19
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#20
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#21
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#22
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#23
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#24
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#25
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#26
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#27
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#28
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#29
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#30
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#31
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#32
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#33
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#34
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#35
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#36
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#37
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#38
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#39
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#40
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#41
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#42
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#43
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#44
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#45
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferences/#46
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences (0.03s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#04 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#05 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#06 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#07 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#08 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#09 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#10 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#11 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#12 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#13 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#14 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#15 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#16 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#17 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#18 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#19 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#20 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#21 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#22 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#23 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#24 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#25 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#26 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#27 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#28 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#29 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#30 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#31 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#32 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#33 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#34 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#35 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#36 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#37 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#38 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#39 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#40 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#41 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#42 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#43 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#44 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#45 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferences/#46 (0.00s)
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferencesMultipleStoresPerLocality
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferencesMultipleStoresPerLocality/#00
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferencesMultipleStoresPerLocality/#01
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferencesMultipleStoresPerLocality/#02
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferencesMultipleStoresPerLocality/#03
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferencesMultipleStoresPerLocality/#04
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferencesMultipleStoresPerLocality/#05
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferencesMultipleStoresPerLocality/#06
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferencesMultipleStoresPerLocality/#07
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferencesMultipleStoresPerLocality/#08
[08:42:53][Step 2/2] === RUN TestAllocatorLeasePreferencesMultipleStoresPerLocality/#09
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferencesMultipleStoresPerLocality (0.02s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferencesMultipleStoresPerLocality/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferencesMultipleStoresPerLocality/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferencesMultipleStoresPerLocality/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferencesMultipleStoresPerLocality/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferencesMultipleStoresPerLocality/#04 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferencesMultipleStoresPerLocality/#05 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferencesMultipleStoresPerLocality/#06 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferencesMultipleStoresPerLocality/#07 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferencesMultipleStoresPerLocality/#08 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorLeasePreferencesMultipleStoresPerLocality/#09 (0.00s)
[08:42:53][Step 2/2] === RUN TestAllocatorRemoveTargetLocality
[08:42:53][Step 2/2] --- PASS: TestAllocatorRemoveTargetLocality (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorAllocateTargetLocality
[08:42:53][Step 2/2] --- PASS: TestAllocatorAllocateTargetLocality (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceTargetLocality
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceTargetLocality (0.02s)
[08:42:53][Step 2/2] === RUN TestAllocateCandidatesNumReplicasConstraints
[08:42:53][Step 2/2] --- PASS: TestAllocateCandidatesNumReplicasConstraints (0.02s)
[08:42:53][Step 2/2] === RUN TestRemoveCandidatesNumReplicasConstraints
[08:42:53][Step 2/2] --- PASS: TestRemoveCandidatesNumReplicasConstraints (0.01s)
[08:42:53][Step 2/2] === RUN TestRebalanceCandidatesNumReplicasConstraints
[08:42:53][Step 2/2] --- PASS: TestRebalanceCandidatesNumReplicasConstraints (0.05s)
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased
[08:42:53][Step 2/2] I181018 08:36:29.816049 350 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"1" > attrs:<> locality:<tiers:<key:"l" value:"1" > > ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:36:29.816684 350 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"2" > attrs:<> locality:<tiers:<key:"l" value:"2" > > ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:36:29.817168 350 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"3" > attrs:<> locality:<tiers:<key:"l" value:"3" > > ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#00
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#01
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#02
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#03
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#04
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#05
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#06
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#07
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#08
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#09
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#10
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#11
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#12
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#13
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#14
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#15
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#16
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#17
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#18
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#19
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#20
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#21
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#22
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#23
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#24
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#25
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#26
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#27
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#28
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#29
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#30
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#31
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#32
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#33
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#34
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#35
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#36
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#37
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#38
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#39
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#40
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#41
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#42
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#43
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#44
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#45
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#46
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#47
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#48
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#49
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#50
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#51
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#52
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#53
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#54
[08:42:53][Step 2/2] === RUN TestAllocatorTransferLeaseTargetLoadBased/#55
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased (0.08s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#04 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#05 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#06 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#07 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#08 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#09 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#10 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#11 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#12 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#13 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#14 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#15 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#16 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#17 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#18 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#19 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#20 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#21 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#22 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#23 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#24 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#25 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#26 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#27 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#28 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#29 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#30 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#31 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#32 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#33 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#34 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#35 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#36 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#37 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#38 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#39 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#40 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#41 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#42 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#43 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#44 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#45 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#46 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#47 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#48 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#49 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#50 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#51 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#52 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#53 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#54 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorTransferLeaseTargetLoadBased/#55 (0.00s)
[08:42:53][Step 2/2] === RUN TestLoadBasedLeaseRebalanceScore
[08:42:53][Step 2/2] --- PASS: TestLoadBasedLeaseRebalanceScore (0.00s)
[08:42:53][Step 2/2] === RUN TestAllocatorRemoveTarget
[08:42:53][Step 2/2] --- PASS: TestAllocatorRemoveTarget (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorComputeAction
[08:42:53][Step 2/2] --- PASS: TestAllocatorComputeAction (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorComputeActionRemoveDead
[08:42:53][Step 2/2] --- PASS: TestAllocatorComputeActionRemoveDead (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorComputeActionDecommission
[08:42:53][Step 2/2] --- PASS: TestAllocatorComputeActionDecommission (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorComputeActionDynamicNumReplicas
[08:42:53][Step 2/2] --- PASS: TestAllocatorComputeActionDynamicNumReplicas (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorGetNeededReplicas
[08:42:53][Step 2/2] --- PASS: TestAllocatorGetNeededReplicas (0.00s)
[08:42:53][Step 2/2] === RUN TestAllocatorComputeActionNoStorePool
[08:42:53][Step 2/2] --- PASS: TestAllocatorComputeActionNoStorePool (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorError
[08:42:53][Step 2/2] --- PASS: TestAllocatorError (0.01s)
[08:42:53][Step 2/2] === RUN TestAllocatorThrottled
[08:42:53][Step 2/2] --- PASS: TestAllocatorThrottled (0.01s)
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#00
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#01
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#02
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#03
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#04
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#05
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#06
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#07
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#08
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#09
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#10
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#11
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#12
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#13
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#14
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#15
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#16
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#17
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#18
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#19
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#20
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#21
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#22
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#23
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#24
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#25
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#26
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#27
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#28
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#29
[08:42:53][Step 2/2] === RUN TestFilterBehindReplicas/#30
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas (0.02s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#04 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#05 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#06 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#07 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#08 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#09 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#10 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#11 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#12 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#13 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#14 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#15 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#16 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#17 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#18 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#19 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#20 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#21 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#22 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#23 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#24 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#25 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#26 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#27 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#28 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#29 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterBehindReplicas/#30 (0.00s)
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#00
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#01
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#02
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#03
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#04
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#05
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#06
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#07
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#08
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#09
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#10
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#11
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#12
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#13
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#14
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#15
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#16
[08:42:53][Step 2/2] === RUN TestFilterUnremovableReplicas/#17
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas (0.01s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#04 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#05 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#06 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#07 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#08 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#09 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#10 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#11 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#12 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#13 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#14 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#15 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#16 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestFilterUnremovableReplicas/#17 (0.00s)
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceAway
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceAway/+datacenter=us
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceAway/-datacenter=eur
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceAway/+datacenter=eur
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceAway/-datacenter=us
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceAway/+datacenter=other
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceAway/-datacenter=other
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceAway/datacenter=other
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceAway/datacenter=us
[08:42:53][Step 2/2] === RUN TestAllocatorRebalanceAway/datacenter=eur
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceAway (0.01s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceAway/+datacenter=us (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceAway/-datacenter=eur (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceAway/+datacenter=eur (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceAway/-datacenter=us (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceAway/+datacenter=other (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceAway/-datacenter=other (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceAway/datacenter=other (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceAway/datacenter=us (0.00s)
[08:42:53][Step 2/2] --- PASS: TestAllocatorRebalanceAway/datacenter=eur (0.00s)
[08:42:53][Step 2/2] === RUN TestAllocatorFullDisks
[08:42:53][Step 2/2] I181018 08:36:30.031880 642 storage/allocator_test.go:5373 s18 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.032730 642 storage/allocator_test.go:5373 s15 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.033508 642 storage/allocator_test.go:5373 s18 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.035518 642 storage/allocator_test.go:5373 s15 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.037884 642 storage/allocator_test.go:5373 s7 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.038419 642 storage/allocator_test.go:5373 s8 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.038956 642 storage/allocator_test.go:5373 s7 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.040531 642 storage/allocator_test.go:5373 s2 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.051147 642 storage/allocator_test.go:5373 s9 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.052771 642 storage/allocator_test.go:5373 s5 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.053697 642 storage/allocator_test.go:5373 s5 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.055899 642 storage/allocator_test.go:5373 s19 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.057701 642 storage/allocator_test.go:5373 s11 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.058552 642 storage/allocator_test.go:5373 s11 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.074544 642 storage/allocator_test.go:5373 s18 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.077327 642 storage/allocator_test.go:5373 s15 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.082442 642 storage/allocator_test.go:5373 s10 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.085032 642 storage/allocator_test.go:5373 s18 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.105968 642 storage/allocator_test.go:5373 s7 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.113910 642 storage/allocator_test.go:5373 s9 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.115019 642 storage/allocator_test.go:5373 s10 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.115973 642 storage/allocator_test.go:5373 s9 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.116839 642 storage/allocator_test.go:5373 s6 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.118498 642 storage/allocator_test.go:5373 s12 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.119362 642 storage/allocator_test.go:5373 s5 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.121784 642 storage/allocator_test.go:5373 s15 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.122845 642 storage/allocator_test.go:5373 s10 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.125121 642 storage/allocator_test.go:5373 s6 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.126820 642 storage/allocator_test.go:5373 s4 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.127717 642 storage/allocator_test.go:5373 s11 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.128458 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.129291 642 storage/allocator_test.go:5373 s11 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.130130 642 storage/allocator_test.go:5373 s15 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.130993 642 storage/allocator_test.go:5373 s15 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.133027 642 storage/allocator_test.go:5373 s9 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.133860 642 storage/allocator_test.go:5373 s5 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.136290 642 storage/allocator_test.go:5373 s6 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.137237 642 storage/allocator_test.go:5373 s10 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.138435 642 storage/allocator_test.go:5373 s5 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.139704 642 storage/allocator_test.go:5373 s11 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.140532 642 storage/allocator_test.go:5373 s12 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.141346 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.146258 642 storage/allocator_test.go:5373 s0 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.151724 642 storage/allocator_test.go:5373 s3 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.153123 642 storage/allocator_test.go:5373 s14 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.154693 642 storage/allocator_test.go:5373 s3 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.155294 642 storage/allocator_test.go:5373 s13 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.156064 642 storage/allocator_test.go:5373 s18 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.156792 642 storage/allocator_test.go:5373 s1 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.157514 642 storage/allocator_test.go:5373 s13 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.158353 642 storage/allocator_test.go:5373 s1 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.159219 642 storage/allocator_test.go:5373 s13 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.160839 642 storage/allocator_test.go:5373 s3 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.163640 642 storage/allocator_test.go:5373 s7 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.164437 642 storage/allocator_test.go:5373 s8 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.165247 642 storage/allocator_test.go:5373 s1 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.166835 642 storage/allocator_test.go:5373 s1 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.167621 642 storage/allocator_test.go:5373 s16 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.168355 642 storage/allocator_test.go:5373 s16 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.169145 642 storage/allocator_test.go:5373 s13 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.170675 642 storage/allocator_test.go:5373 s0 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.182423 642 storage/allocator_test.go:5373 s7 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.184374 642 storage/allocator_test.go:5373 s8 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.186351 642 storage/allocator_test.go:5373 s2 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.187199 642 storage/allocator_test.go:5373 s18 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.188091 642 storage/allocator_test.go:5373 s2 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.190620 642 storage/allocator_test.go:5373 s18 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.193381 642 storage/allocator_test.go:5373 s19 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.193938 642 storage/allocator_test.go:5373 s11 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.194682 642 storage/allocator_test.go:5373 s15 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.195204 642 storage/allocator_test.go:5373 s12 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.195951 642 storage/allocator_test.go:5373 s12 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.196479 642 storage/allocator_test.go:5373 s12 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.197936 642 storage/allocator_test.go:5373 s9 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.198964 642 storage/allocator_test.go:5373 s12 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.199765 642 storage/allocator_test.go:5373 s14 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.200615 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.201144 642 storage/allocator_test.go:5373 s12 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.201917 642 storage/allocator_test.go:5373 s6 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.202486 642 storage/allocator_test.go:5373 s17 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.203316 642 storage/allocator_test.go:5373 s9 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.203926 642 storage/allocator_test.go:5373 s2 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.205370 642 storage/allocator_test.go:5373 s19 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.206384 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.207208 642 storage/allocator_test.go:5373 s12 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.208533 642 storage/allocator_test.go:5373 s4 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.210324 642 storage/allocator_test.go:5373 s4 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.221259 642 storage/allocator_test.go:5373 s19 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.224112 642 storage/allocator_test.go:5373 s16 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.226056 642 storage/allocator_test.go:5373 s3 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.227182 642 storage/allocator_test.go:5373 s0 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.227938 642 storage/allocator_test.go:5373 s18 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.228717 642 storage/allocator_test.go:5373 s7 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.229827 642 storage/allocator_test.go:5373 s16 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.230559 642 storage/allocator_test.go:5373 s0 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.231781 642 storage/allocator_test.go:5373 s16 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.232611 642 storage/allocator_test.go:5373 s5 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.233807 642 storage/allocator_test.go:5373 s7 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.235001 642 storage/allocator_test.go:5373 s0 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.241936 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.242755 642 storage/allocator_test.go:5373 s5 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.243355 642 storage/allocator_test.go:5373 s8 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.244019 642 storage/allocator_test.go:5373 s6 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.244743 642 storage/allocator_test.go:5373 s8 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.245890 642 storage/allocator_test.go:5373 s5 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.247256 642 storage/allocator_test.go:5373 s8 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.247788 642 storage/allocator_test.go:5373 s10 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.248478 642 storage/allocator_test.go:5373 s5 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.249393 642 storage/allocator_test.go:5373 s10 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.250158 642 storage/allocator_test.go:5373 s10 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.251181 642 storage/allocator_test.go:5373 s10 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.252135 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.252684 642 storage/allocator_test.go:5373 s10 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.253213 642 storage/allocator_test.go:5373 s10 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.253784 642 storage/allocator_test.go:5373 s6 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.254281 642 storage/allocator_test.go:5373 s9 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.255275 642 storage/allocator_test.go:5373 s8 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.256497 642 storage/allocator_test.go:5373 s5 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.257039 642 storage/allocator_test.go:5373 s5 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.257518 642 storage/allocator_test.go:5373 s8 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.258278 642 storage/allocator_test.go:5373 s17 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.258777 642 storage/allocator_test.go:5373 s5 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.260240 642 storage/allocator_test.go:5373 s6 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.261823 642 storage/allocator_test.go:5373 s8 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.263452 642 storage/allocator_test.go:5373 s8 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.264898 642 storage/allocator_test.go:5373 s10 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.268387 642 storage/allocator_test.go:5373 s0 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.269529 642 storage/allocator_test.go:5373 s1 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.271968 642 storage/allocator_test.go:5373 s3 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.273690 642 storage/allocator_test.go:5373 s3 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.275485 642 storage/allocator_test.go:5373 s5 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.276613 642 storage/allocator_test.go:5373 s10 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.277720 642 storage/allocator_test.go:5373 s4 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.279700 642 storage/allocator_test.go:5373 s3 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.280732 642 storage/allocator_test.go:5373 s11 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.282091 642 storage/allocator_test.go:5373 s15 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.284222 642 storage/allocator_test.go:5373 s18 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.285341 642 storage/allocator_test.go:5373 s4 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.285929 642 storage/allocator_test.go:5373 s15 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.286783 642 storage/allocator_test.go:5373 s11 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.287551 642 storage/allocator_test.go:5373 s1 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.288367 642 storage/allocator_test.go:5373 s18 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.290787 642 storage/allocator_test.go:5373 s11 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.291809 642 storage/allocator_test.go:5373 s11 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.292300 642 storage/allocator_test.go:5373 s11 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.293103 642 storage/allocator_test.go:5373 s18 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.293897 642 storage/allocator_test.go:5373 s13 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.294758 642 storage/allocator_test.go:5373 s1 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.299421 642 storage/allocator_test.go:5373 s13 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.301362 642 storage/allocator_test.go:5373 s2 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.302233 642 storage/allocator_test.go:5373 s13 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.303669 642 storage/allocator_test.go:5373 s18 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.306040 642 storage/allocator_test.go:5373 s11 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.308101 642 storage/allocator_test.go:5373 s13 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.321966 642 storage/allocator_test.go:5373 s7 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.322763 642 storage/allocator_test.go:5373 s12 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.323881 642 storage/allocator_test.go:5373 s0 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.325487 642 storage/allocator_test.go:5373 s7 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.326653 642 storage/allocator_test.go:5373 s2 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.327434 642 storage/allocator_test.go:5373 s2 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.328407 642 storage/allocator_test.go:5373 s0 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.330038 642 storage/allocator_test.go:5373 s4 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.331145 642 storage/allocator_test.go:5373 s15 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.332056 642 storage/allocator_test.go:5373 s19 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.333025 642 storage/allocator_test.go:5373 s0 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.334265 642 storage/allocator_test.go:5373 s19 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.335160 642 storage/allocator_test.go:5373 s6 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.335837 642 storage/allocator_test.go:5373 s15 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.336795 642 storage/allocator_test.go:5373 s2 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.338501 642 storage/allocator_test.go:5373 s7 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.340098 642 storage/allocator_test.go:5373 s7 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.341625 642 storage/allocator_test.go:5373 s14 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.342933 642 storage/allocator_test.go:5373 s3 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.344694 642 storage/allocator_test.go:5373 s15 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.345940 642 storage/allocator_test.go:5373 s1 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.346712 642 storage/allocator_test.go:5373 s3 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.347968 642 storage/allocator_test.go:5373 s3 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.351234 642 storage/allocator_test.go:5373 s16 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.352976 642 storage/allocator_test.go:5373 s1 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.354067 642 storage/allocator_test.go:5373 s1 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.355199 642 storage/allocator_test.go:5373 s18 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.355961 642 storage/allocator_test.go:5373 s12 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.357113 642 storage/allocator_test.go:5373 s12 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.359179 642 storage/allocator_test.go:5373 s13 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.360193 642 storage/allocator_test.go:5373 s3 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.361948 642 storage/allocator_test.go:5373 s13 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.362531 642 storage/allocator_test.go:5373 s9 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.363934 642 storage/allocator_test.go:5373 s18 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.371051 642 storage/allocator_test.go:5373 s12 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.371943 642 storage/allocator_test.go:5373 s14 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.372468 642 storage/allocator_test.go:5373 s14 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.373002 642 storage/allocator_test.go:5373 s17 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.373709 642 storage/allocator_test.go:5373 s14 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.374839 642 storage/allocator_test.go:5373 s8 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.376378 642 storage/allocator_test.go:5373 s14 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.377375 642 storage/allocator_test.go:5373 s11 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.378162 642 storage/allocator_test.go:5373 s14 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.378699 642 storage/allocator_test.go:5373 s9 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.379481 642 storage/allocator_test.go:5373 s12 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.380027 642 storage/allocator_test.go:5373 s17 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.380535 642 storage/allocator_test.go:5373 s14 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.381350 642 storage/allocator_test.go:5373 s8 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.381887 642 storage/allocator_test.go:5373 s9 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.382516 642 storage/allocator_test.go:5373 s17 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.383044 642 storage/allocator_test.go:5373 s11 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.383933 642 storage/allocator_test.go:5373 s8 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.385090 642 storage/allocator_test.go:5373 s14 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.386216 642 storage/allocator_test.go:5373 s11 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.387013 642 storage/allocator_test.go:5373 s14 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.387680 642 storage/allocator_test.go:5373 s8 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.388693 642 storage/allocator_test.go:5373 s11 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.389311 642 storage/allocator_test.go:5373 s11 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.389875 642 storage/allocator_test.go:5373 s11 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.391417 642 storage/allocator_test.go:5373 s17 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.392232 642 storage/allocator_test.go:5373 s11 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.393069 642 storage/allocator_test.go:5373 s0 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.394004 642 storage/allocator_test.go:5373 s0 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.395549 642 storage/allocator_test.go:5373 s11 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.396421 642 storage/allocator_test.go:5373 s4 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.397349 642 storage/allocator_test.go:5373 s3 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.398242 642 storage/allocator_test.go:5373 s4 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.399538 642 storage/allocator_test.go:5373 s0 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.401355 642 storage/allocator_test.go:5373 s10 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.405953 642 storage/allocator_test.go:5373 s0 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.407763 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.409065 642 storage/allocator_test.go:5373 s10 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.409933 642 storage/allocator_test.go:5373 s7 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.413537 642 storage/allocator_test.go:5373 s15 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.414697 642 storage/allocator_test.go:5373 s2 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.415867 642 storage/allocator_test.go:5373 s13 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.416497 642 storage/allocator_test.go:5373 s13 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.417327 642 storage/allocator_test.go:5373 s2 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.418375 642 storage/allocator_test.go:5373 s10 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.421302 642 storage/allocator_test.go:5373 s15 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.421861 642 storage/allocator_test.go:5373 s4 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.422658 642 storage/allocator_test.go:5373 s10 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.423198 642 storage/allocator_test.go:5373 s18 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.424124 642 storage/allocator_test.go:5373 s15 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.425348 642 storage/allocator_test.go:5373 s10 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.432096 642 storage/allocator_test.go:5373 s18 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.433154 642 storage/allocator_test.go:5373 s19 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.434688 642 storage/allocator_test.go:5373 s9 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.435861 642 storage/allocator_test.go:5373 s18 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.437514 642 storage/allocator_test.go:5373 s19 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.439434 642 storage/allocator_test.go:5373 s12 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.441659 642 storage/allocator_test.go:5373 s12 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.443585 642 storage/allocator_test.go:5373 s9 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.445556 642 storage/allocator_test.go:5373 s7 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.448678 642 storage/allocator_test.go:5373 s16 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.449434 642 storage/allocator_test.go:5373 s9 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.451091 642 storage/allocator_test.go:5373 s6 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.451945 642 storage/allocator_test.go:5373 s5 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.453501 642 storage/allocator_test.go:5373 s9 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.454707 642 storage/allocator_test.go:5373 s19 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.455911 642 storage/allocator_test.go:5373 s12 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.457076 642 storage/allocator_test.go:5373 s12 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.459798 642 storage/allocator_test.go:5373 s12 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.460593 642 storage/allocator_test.go:5373 s9 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.462172 642 storage/allocator_test.go:5373 s9 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.463120 642 storage/allocator_test.go:5373 s8 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.464856 642 storage/allocator_test.go:5373 s16 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.466122 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.468123 642 storage/allocator_test.go:5373 s12 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.469999 642 storage/allocator_test.go:5373 s0 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.474690 642 storage/allocator_test.go:5373 s4 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.475896 642 storage/allocator_test.go:5373 s13 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.478464 642 storage/allocator_test.go:5373 s2 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.480727 642 storage/allocator_test.go:5373 s19 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.483391 642 storage/allocator_test.go:5373 s4 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.487724 642 storage/allocator_test.go:5373 s4 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.488513 642 storage/allocator_test.go:5373 s4 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.489290 642 storage/allocator_test.go:5373 s18 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.490063 642 storage/allocator_test.go:5373 s2 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.491202 642 storage/allocator_test.go:5373 s18 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.491952 642 storage/allocator_test.go:5373 s2 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.492705 642 storage/allocator_test.go:5373 s2 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.493425 642 storage/allocator_test.go:5373 s4 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.494560 642 storage/allocator_test.go:5373 s15 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.497303 642 storage/allocator_test.go:5373 s2 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.502637 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.503156 642 storage/allocator_test.go:5373 s1 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.503678 642 storage/allocator_test.go:5373 s10 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.504862 642 storage/allocator_test.go:5373 s7 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.505354 642 storage/allocator_test.go:5373 s13 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.506323 642 storage/allocator_test.go:5373 s13 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.508145 642 storage/allocator_test.go:5373 s15 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.509061 642 storage/allocator_test.go:5373 s1 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.509613 642 storage/allocator_test.go:5373 s7 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.510095 642 storage/allocator_test.go:5373 s15 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.510618 642 storage/allocator_test.go:5373 s15 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.511979 642 storage/allocator_test.go:5373 s6 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.512794 642 storage/allocator_test.go:5373 s1 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.513553 642 storage/allocator_test.go:5373 s6 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.515554 642 storage/allocator_test.go:5373 s10 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.516363 642 storage/allocator_test.go:5373 s1 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.517895 642 storage/allocator_test.go:5373 s13 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.519132 642 storage/allocator_test.go:5373 s1 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.520283 642 storage/allocator_test.go:5373 s10 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.521083 642 storage/allocator_test.go:5373 s13 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.521874 642 storage/allocator_test.go:5373 s1 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.522862 642 storage/allocator_test.go:5373 s15 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.525528 642 storage/allocator_test.go:5373 s15 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.526791 642 storage/allocator_test.go:5373 s15 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.528016 642 storage/allocator_test.go:5373 s6 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.531363 642 storage/allocator_test.go:5373 s15 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.532482 642 storage/allocator_test.go:5373 s3 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.535115 642 storage/allocator_test.go:5373 s11 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.537093 642 storage/allocator_test.go:5373 s13 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.538945 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.541484 642 storage/allocator_test.go:5373 s8 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.543300 642 storage/allocator_test.go:5373 s3 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.544825 642 storage/allocator_test.go:5373 s16 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.546494 642 storage/allocator_test.go:5373 s8 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.548215 642 storage/allocator_test.go:5373 s17 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.549488 642 storage/allocator_test.go:5373 s18 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.550343 642 storage/allocator_test.go:5373 s19 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.551970 642 storage/allocator_test.go:5373 s16 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.553608 642 storage/allocator_test.go:5373 s18 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.554927 642 storage/allocator_test.go:5373 s14 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.559386 642 storage/allocator_test.go:5373 s16 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.560998 642 storage/allocator_test.go:5373 s8 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.562239 642 storage/allocator_test.go:5373 s8 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.563040 642 storage/allocator_test.go:5373 s18 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.564665 642 storage/allocator_test.go:5373 s17 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.566286 642 storage/allocator_test.go:5373 s19 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.567490 642 storage/allocator_test.go:5373 s14 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.570707 642 storage/allocator_test.go:5373 s18 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.573453 642 storage/allocator_test.go:5373 s16 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.575481 642 storage/allocator_test.go:5373 s19 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.576682 642 storage/allocator_test.go:5373 s18 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.578512 642 storage/allocator_test.go:5373 s18 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.579608 642 storage/allocator_test.go:5373 s12 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.580810 642 storage/allocator_test.go:5373 s6 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.582684 642 storage/allocator_test.go:5373 s17 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.583929 642 storage/allocator_test.go:5373 s14 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.586482 642 storage/allocator_test.go:5373 s9 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.586980 642 storage/allocator_test.go:5373 s10 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.591191 642 storage/allocator_test.go:5373 s9 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.593660 642 storage/allocator_test.go:5373 s4 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.594813 642 storage/allocator_test.go:5373 s4 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.595642 642 storage/allocator_test.go:5373 s4 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.596412 642 storage/allocator_test.go:5373 s4 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.596929 642 storage/allocator_test.go:5373 s6 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.597418 642 storage/allocator_test.go:5373 s6 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.597953 642 storage/allocator_test.go:5373 s4 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.598464 642 storage/allocator_test.go:5373 s4 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.600501 642 storage/allocator_test.go:5373 s7 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.602232 642 storage/allocator_test.go:5373 s9 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.603130 642 storage/allocator_test.go:5373 s6 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.603715 642 storage/allocator_test.go:5373 s10 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.604250 642 storage/allocator_test.go:5373 s4 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.604852 642 storage/allocator_test.go:5373 s10 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.605468 642 storage/allocator_test.go:5373 s9 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.606098 642 storage/allocator_test.go:5373 s10 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.607125 642 storage/allocator_test.go:5373 s4 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.612215 642 storage/allocator_test.go:5373 s9 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.615731 642 storage/allocator_test.go:5373 s0 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.617282 642 storage/allocator_test.go:5373 s9 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.618321 642 storage/allocator_test.go:5373 s9 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.619460 642 storage/allocator_test.go:5373 s2 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.620604 642 storage/allocator_test.go:5373 s15 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.621481 642 storage/allocator_test.go:5373 s17 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.623302 642 storage/allocator_test.go:5373 s13 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.624054 642 storage/allocator_test.go:5373 s19 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.625012 642 storage/allocator_test.go:5373 s0 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.628508 642 storage/allocator_test.go:5373 s17 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.630601 642 storage/allocator_test.go:5373 s13 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.632077 642 storage/allocator_test.go:5373 s8 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.639605 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.640778 642 storage/allocator_test.go:5373 s19 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.641556 642 storage/allocator_test.go:5373 s16 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.642316 642 storage/allocator_test.go:5373 s17 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.643454 642 storage/allocator_test.go:5373 s14 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.643963 642 storage/allocator_test.go:5373 s1 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.644764 642 storage/allocator_test.go:5373 s16 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.645791 642 storage/allocator_test.go:5373 s17 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.648623 642 storage/allocator_test.go:5373 s19 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.649697 642 storage/allocator_test.go:5373 s8 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.650350 642 storage/allocator_test.go:5373 s16 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.651131 642 storage/allocator_test.go:5373 s17 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.652623 642 storage/allocator_test.go:5373 s1 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.653392 642 storage/allocator_test.go:5373 s17 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.654605 642 storage/allocator_test.go:5373 s16 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.656131 642 storage/allocator_test.go:5373 s0 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.660138 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.661502 642 storage/allocator_test.go:5373 s14 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.662368 642 storage/allocator_test.go:5373 s2 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.663231 642 storage/allocator_test.go:5373 s17 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.665053 642 storage/allocator_test.go:5373 s6 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.665902 642 storage/allocator_test.go:5373 s2 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.666724 642 storage/allocator_test.go:5373 s19 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.667609 642 storage/allocator_test.go:5373 s19 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.668406 642 storage/allocator_test.go:5373 s6 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.672351 642 storage/allocator_test.go:5373 s2 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.674944 642 storage/allocator_test.go:5373 s2 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.677792 642 storage/allocator_test.go:5373 s12 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.679144 642 storage/allocator_test.go:5373 s2 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.679992 642 storage/allocator_test.go:5373 s2 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.681471 642 storage/allocator_test.go:5373 s2 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.682452 642 storage/allocator_test.go:5373 s5 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.683209 642 storage/allocator_test.go:5373 s13 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.684303 642 storage/allocator_test.go:5373 s12 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.685802 642 storage/allocator_test.go:5373 s6 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.686784 642 storage/allocator_test.go:5373 s5 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.687587 642 storage/allocator_test.go:5373 s6 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.689807 642 storage/allocator_test.go:5373 s12 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.690698 642 storage/allocator_test.go:5373 s6 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.691225 642 storage/allocator_test.go:5373 s5 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.691974 642 storage/allocator_test.go:5373 s15 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.694623 642 storage/allocator_test.go:5373 s6 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.697133 642 storage/allocator_test.go:5373 s18 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.698875 642 storage/allocator_test.go:5373 s12 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.702351 642 storage/allocator_test.go:5373 s18 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.703889 642 storage/allocator_test.go:5373 s14 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.704806 642 storage/allocator_test.go:5373 s7 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.706320 642 storage/allocator_test.go:5373 s11 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.708724 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.710471 642 storage/allocator_test.go:5373 s14 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.711676 642 storage/allocator_test.go:5373 s14 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.713471 642 storage/allocator_test.go:5373 s7 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.715204 642 storage/allocator_test.go:5373 s9 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.715777 642 storage/allocator_test.go:5373 s7 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.717084 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.718113 642 storage/allocator_test.go:5373 s14 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.718634 642 storage/allocator_test.go:5373 s14 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.720332 642 storage/allocator_test.go:5373 s14 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.721863 642 storage/allocator_test.go:5373 s14 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.722532 642 storage/allocator_test.go:5373 s14 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.724887 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.726893 642 storage/allocator_test.go:5373 s11 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.727839 642 storage/allocator_test.go:5373 s7 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.729234 642 storage/allocator_test.go:5373 s5 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.732174 642 storage/allocator_test.go:5373 s14 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.735524 642 storage/allocator_test.go:5373 s8 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.736646 642 storage/allocator_test.go:5373 s8 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.739217 642 storage/allocator_test.go:5373 s18 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.741158 642 storage/allocator_test.go:5373 s17 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.741958 642 storage/allocator_test.go:5373 s17 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.744974 642 storage/allocator_test.go:5373 s5 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.747169 642 storage/allocator_test.go:5373 s17 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.756865 642 storage/allocator_test.go:5373 s13 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.757713 642 storage/allocator_test.go:5373 s18 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.758662 642 storage/allocator_test.go:5373 s15 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.760221 642 storage/allocator_test.go:5373 s18 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.761008 642 storage/allocator_test.go:5373 s15 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.761843 642 storage/allocator_test.go:5373 s12 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.762649 642 storage/allocator_test.go:5373 s12 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.763478 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.765461 642 storage/allocator_test.go:5373 s15 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.766668 642 storage/allocator_test.go:5373 s18 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.767491 642 storage/allocator_test.go:5373 s0 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.769452 642 storage/allocator_test.go:5373 s15 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.770249 642 storage/allocator_test.go:5373 s18 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.771094 642 storage/allocator_test.go:5373 s18 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.772708 642 storage/allocator_test.go:5373 s15 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.773472 642 storage/allocator_test.go:5373 s0 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.775436 642 storage/allocator_test.go:5373 s12 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.776217 642 storage/allocator_test.go:5373 s18 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.777036 642 storage/allocator_test.go:5373 s18 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.778314 642 storage/allocator_test.go:5373 s18 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.779087 642 storage/allocator_test.go:5373 s18 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.779647 642 storage/allocator_test.go:5373 s11 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.781945 642 storage/allocator_test.go:5373 s18 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.783153 642 storage/allocator_test.go:5373 s15 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.784361 642 storage/allocator_test.go:5373 s18 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.786500 642 storage/allocator_test.go:5373 s3 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.787339 642 storage/allocator_test.go:5373 s18 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.788125 642 storage/allocator_test.go:5373 s2 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.788933 642 storage/allocator_test.go:5373 s18 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.789769 642 storage/allocator_test.go:5373 s3 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.793167 642 storage/allocator_test.go:5373 s6 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.796057 642 storage/allocator_test.go:5373 s9 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.800406 642 storage/allocator_test.go:5373 s3 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.802481 642 storage/allocator_test.go:5373 s2 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.803763 642 storage/allocator_test.go:5373 s16 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.806953 642 storage/allocator_test.go:5373 s2 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.809408 642 storage/allocator_test.go:5373 s10 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.810602 642 storage/allocator_test.go:5373 s16 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.811428 642 storage/allocator_test.go:5373 s6 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.813047 642 storage/allocator_test.go:5373 s10 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.814228 642 storage/allocator_test.go:5373 s9 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.817410 642 storage/allocator_test.go:5373 s6 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.819884 642 storage/allocator_test.go:5373 s1 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.821116 642 storage/allocator_test.go:5373 s1 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.821914 642 storage/allocator_test.go:5373 s1 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.823516 642 storage/allocator_test.go:5373 s1 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.824746 642 storage/allocator_test.go:5373 s3 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.831055 642 storage/allocator_test.go:5373 s6 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.833945 642 storage/allocator_test.go:5373 s11 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.835342 642 storage/allocator_test.go:5373 s0 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.836205 642 storage/allocator_test.go:5373 s7 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.838459 642 storage/allocator_test.go:5373 s5 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.840667 642 storage/allocator_test.go:5373 s17 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.841444 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.842918 642 storage/allocator_test.go:5373 s8 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.845158 642 storage/allocator_test.go:5373 s17 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.848606 642 storage/allocator_test.go:5373 s17 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.849178 642 storage/allocator_test.go:5373 s17 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.850187 642 storage/allocator_test.go:5373 s19 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.851202 642 storage/allocator_test.go:5373 s19 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.851691 642 storage/allocator_test.go:5373 s8 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.852147 642 storage/allocator_test.go:5373 s17 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.853131 642 storage/allocator_test.go:5373 s8 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.855306 642 storage/allocator_test.go:5373 s8 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.855826 642 storage/allocator_test.go:5373 s19 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.856868 642 storage/allocator_test.go:5373 s0 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.858286 642 storage/allocator_test.go:5373 s8 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.859099 642 storage/allocator_test.go:5373 s8 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.859941 642 storage/allocator_test.go:5373 s19 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:30.863274 642 storage/allocator_test.go:5373 s8 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.869513 642 storage/allocator_test.go:5373 s2 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.870731 642 storage/allocator_test.go:5373 s4 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.873451 642 storage/allocator_test.go:5373 s17 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.875330 642 storage/allocator_test.go:5373 s6 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.877710 642 storage/allocator_test.go:5373 s15 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.878934 642 storage/allocator_test.go:5373 s15 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.879764 642 storage/allocator_test.go:5373 s14 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.881334 642 storage/allocator_test.go:5373 s15 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.885917 642 storage/allocator_test.go:5373 s15 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.886726 642 storage/allocator_test.go:5373 s14 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.887740 642 storage/allocator_test.go:5373 s12 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.894880 642 storage/allocator_test.go:5373 s0 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.895701 642 storage/allocator_test.go:5373 s2 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.896268 642 storage/allocator_test.go:5373 s9 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.896854 642 storage/allocator_test.go:5373 s1 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.898144 642 storage/allocator_test.go:5373 s9 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.898878 642 storage/allocator_test.go:5373 s9 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.899496 642 storage/allocator_test.go:5373 s10 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.900142 642 storage/allocator_test.go:5373 s9 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.901359 642 storage/allocator_test.go:5373 s9 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.902157 642 storage/allocator_test.go:5373 s1 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.902826 642 storage/allocator_test.go:5373 s1 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.904596 642 storage/allocator_test.go:5373 s9 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.905526 642 storage/allocator_test.go:5373 s1 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.906061 642 storage/allocator_test.go:5373 s10 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:30.906553 642 storage/allocator_test.go:5373 s1 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.907614 642 storage/allocator_test.go:5373 s9 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.908392 642 storage/allocator_test.go:5373 s9 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:30.908917 642 storage/allocator_test.go:5373 s2 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.909429 642 storage/allocator_test.go:5373 s2 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.910166 642 storage/allocator_test.go:5373 s2 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.910793 642 storage/allocator_test.go:5373 s9 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.911517 642 storage/allocator_test.go:5373 s2 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.914206 642 storage/allocator_test.go:5373 s16 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.916877 642 storage/allocator_test.go:5373 s6 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.921254 642 storage/allocator_test.go:5373 s12 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:30.922806 642 storage/allocator_test.go:5373 s8 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.924487 642 storage/allocator_test.go:5373 s7 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.926104 642 storage/allocator_test.go:5373 s17 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.927130 642 storage/allocator_test.go:5373 s12 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.927710 642 storage/allocator_test.go:5373 s11 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.928339 642 storage/allocator_test.go:5373 s5 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.930320 642 storage/allocator_test.go:5373 s4 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.930876 642 storage/allocator_test.go:5373 s19 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.932551 642 storage/allocator_test.go:5373 s17 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.933331 642 storage/allocator_test.go:5373 s19 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.934094 642 storage/allocator_test.go:5373 s6 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.934614 642 storage/allocator_test.go:5373 s4 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.935192 642 storage/allocator_test.go:5373 s6 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.937169 642 storage/allocator_test.go:5373 s13 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.937892 642 storage/allocator_test.go:5373 s14 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.940139 642 storage/allocator_test.go:5373 s14 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.941278 642 storage/allocator_test.go:5373 s6 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.942397 642 storage/allocator_test.go:5373 s19 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:30.943347 642 storage/allocator_test.go:5373 s19 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.944147 642 storage/allocator_test.go:5373 s0 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.947991 642 storage/allocator_test.go:5373 s0 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.948935 642 storage/allocator_test.go:5373 s11 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:30.952539 642 storage/allocator_test.go:5373 s19 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:30.954169 642 storage/allocator_test.go:5373 s0 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:30.956109 642 storage/allocator_test.go:5373 s10 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.956683 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.957496 642 storage/allocator_test.go:5373 s16 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.958051 642 storage/allocator_test.go:5373 s0 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.959390 642 storage/allocator_test.go:5373 s18 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.961224 642 storage/allocator_test.go:5373 s18 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.962085 642 storage/allocator_test.go:5373 s0 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.963359 642 storage/allocator_test.go:5373 s16 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.964186 642 storage/allocator_test.go:5373 s0 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.965437 642 storage/allocator_test.go:5373 s16 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.967034 642 storage/allocator_test.go:5373 s0 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.968229 642 storage/allocator_test.go:5373 s10 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.969383 642 storage/allocator_test.go:5373 s0 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.970179 642 storage/allocator_test.go:5373 s10 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:30.971378 642 storage/allocator_test.go:5373 s16 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.972169 642 storage/allocator_test.go:5373 s0 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.972949 642 storage/allocator_test.go:5373 s0 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.974139 642 storage/allocator_test.go:5373 s16 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.974936 642 storage/allocator_test.go:5373 s16 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.976134 642 storage/allocator_test.go:5373 s18 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.976944 642 storage/allocator_test.go:5373 s18 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.978116 642 storage/allocator_test.go:5373 s16 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.979722 642 storage/allocator_test.go:5373 s15 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:30.980894 642 storage/allocator_test.go:5373 s16 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:30.982647 642 storage/allocator_test.go:5373 s18 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:30.986545 642 storage/allocator_test.go:5373 s18 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:30.987721 642 storage/allocator_test.go:5373 s16 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:30.988906 642 storage/allocator_test.go:5373 s4 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:30.990894 642 storage/allocator_test.go:5373 s2 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:30.992100 642 storage/allocator_test.go:5373 s1 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:30.994045 642 storage/allocator_test.go:5373 s8 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:30.995248 642 storage/allocator_test.go:5373 s16 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:30.997214 642 storage/allocator_test.go:5373 s8 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:30.999924 642 storage/allocator_test.go:5373 s5 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.001902 642 storage/allocator_test.go:5373 s11 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.003147 642 storage/allocator_test.go:5373 s8 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.007128 642 storage/allocator_test.go:5373 s11 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.007997 642 storage/allocator_test.go:5373 s4 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.012828 642 storage/allocator_test.go:5373 s1 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.013669 642 storage/allocator_test.go:5373 s17 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.014861 642 storage/allocator_test.go:5373 s8 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.022451 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.024527 642 storage/allocator_test.go:5373 s13 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.026479 642 storage/allocator_test.go:5373 s5 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.027692 642 storage/allocator_test.go:5373 s3 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.030123 642 storage/allocator_test.go:5373 s14 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.030908 642 storage/allocator_test.go:5373 s6 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.032139 642 storage/allocator_test.go:5373 s9 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.032962 642 storage/allocator_test.go:5373 s13 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.033741 642 storage/allocator_test.go:5373 s12 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.035551 642 storage/allocator_test.go:5373 s14 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.036785 642 storage/allocator_test.go:5373 s3 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.037527 642 storage/allocator_test.go:5373 s13 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.038951 642 storage/allocator_test.go:5373 s7 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.039510 642 storage/allocator_test.go:5373 s14 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.040237 642 storage/allocator_test.go:5373 s14 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.040760 642 storage/allocator_test.go:5373 s14 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.041476 642 storage/allocator_test.go:5373 s14 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.044889 642 storage/allocator_test.go:5373 s9 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.048401 642 storage/allocator_test.go:5373 s12 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.050501 642 storage/allocator_test.go:5373 s13 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.055627 642 storage/allocator_test.go:5373 s4 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.058152 642 storage/allocator_test.go:5373 s15 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.064683 642 storage/allocator_test.go:5373 s8 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.065826 642 storage/allocator_test.go:5373 s15 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.066637 642 storage/allocator_test.go:5373 s15 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.067206 642 storage/allocator_test.go:5373 s4 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.067777 642 storage/allocator_test.go:5373 s19 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.068589 642 storage/allocator_test.go:5373 s19 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.069877 642 storage/allocator_test.go:5373 s17 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.070439 642 storage/allocator_test.go:5373 s4 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.071225 642 storage/allocator_test.go:5373 s12 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.071990 642 storage/allocator_test.go:5373 s4 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.072795 642 storage/allocator_test.go:5373 s8 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.073833 642 storage/allocator_test.go:5373 s4 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.074648 642 storage/allocator_test.go:5373 s8 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.075203 642 storage/allocator_test.go:5373 s15 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.075788 642 storage/allocator_test.go:5373 s1 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.076572 642 storage/allocator_test.go:5373 s10 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.077842 642 storage/allocator_test.go:5373 s19 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.078346 642 storage/allocator_test.go:5373 s19 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.079123 642 storage/allocator_test.go:5373 s8 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.079882 642 storage/allocator_test.go:5373 s19 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.080624 642 storage/allocator_test.go:5373 s15 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.082744 642 storage/allocator_test.go:5373 s0 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.084788 642 storage/allocator_test.go:5373 s10 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.085869 642 storage/allocator_test.go:5373 s3 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.086977 642 storage/allocator_test.go:5373 s15 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.088983 642 storage/allocator_test.go:5373 s7 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.092549 642 storage/allocator_test.go:5373 s19 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.093825 642 storage/allocator_test.go:5373 s3 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.095819 642 storage/allocator_test.go:5373 s6 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.097776 642 storage/allocator_test.go:5373 s7 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.100192 642 storage/allocator_test.go:5373 s9 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.102765 642 storage/allocator_test.go:5373 s18 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.105306 642 storage/allocator_test.go:5373 s10 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.107289 642 storage/allocator_test.go:5373 s6 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.108532 642 storage/allocator_test.go:5373 s18 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.109326 642 storage/allocator_test.go:5373 s0 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.110170 642 storage/allocator_test.go:5373 s7 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.111032 642 storage/allocator_test.go:5373 s18 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.112246 642 storage/allocator_test.go:5373 s3 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.113488 642 storage/allocator_test.go:5373 s6 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.115145 642 storage/allocator_test.go:5373 s18 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.117201 642 storage/allocator_test.go:5373 s0 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.119215 642 storage/allocator_test.go:5373 s7 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.120424 642 storage/allocator_test.go:5373 s0 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.121230 642 storage/allocator_test.go:5373 s3 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.122033 642 storage/allocator_test.go:5373 s18 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.122843 642 storage/allocator_test.go:5373 s18 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.124060 642 storage/allocator_test.go:5373 s7 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.125684 642 storage/allocator_test.go:5373 s7 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.128344 642 storage/allocator_test.go:5373 s6 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.131799 642 storage/allocator_test.go:5373 s9 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.136833 642 storage/allocator_test.go:5373 s1 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.138105 642 storage/allocator_test.go:5373 s8 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.139392 642 storage/allocator_test.go:5373 s8 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.140689 642 storage/allocator_test.go:5373 s14 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.142685 642 storage/allocator_test.go:5373 s17 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.143916 642 storage/allocator_test.go:5373 s4 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.145552 642 storage/allocator_test.go:5373 s1 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.147150 642 storage/allocator_test.go:5373 s1 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.147969 642 storage/allocator_test.go:5373 s5 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.149214 642 storage/allocator_test.go:5373 s1 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.150026 642 storage/allocator_test.go:5373 s1 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.154378 642 storage/allocator_test.go:5373 s1 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.156003 642 storage/allocator_test.go:5373 s14 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.156825 642 storage/allocator_test.go:5373 s1 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.163515 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.164104 642 storage/allocator_test.go:5373 s5 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.165689 642 storage/allocator_test.go:5373 s12 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.166202 642 storage/allocator_test.go:5373 s10 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.166766 642 storage/allocator_test.go:5373 s4 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.167822 642 storage/allocator_test.go:5373 s3 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.168885 642 storage/allocator_test.go:5373 s12 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.169422 642 storage/allocator_test.go:5373 s10 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.170445 642 storage/allocator_test.go:5373 s9 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.171265 642 storage/allocator_test.go:5373 s16 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.171818 642 storage/allocator_test.go:5373 s17 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.173336 642 storage/allocator_test.go:5373 s12 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.173923 642 storage/allocator_test.go:5373 s10 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.174468 642 storage/allocator_test.go:5373 s17 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.175522 642 storage/allocator_test.go:5373 s3 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.176532 642 storage/allocator_test.go:5373 s3 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.177102 642 storage/allocator_test.go:5373 s4 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.178273 642 storage/allocator_test.go:5373 s9 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.179429 642 storage/allocator_test.go:5373 s10 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.180620 642 storage/allocator_test.go:5373 s19 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.184461 642 storage/allocator_test.go:5373 s9 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.185649 642 storage/allocator_test.go:5373 s0 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.186820 642 storage/allocator_test.go:5373 s0 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.189455 642 storage/allocator_test.go:5373 s8 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.197865 642 storage/allocator_test.go:5373 s8 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.199430 642 storage/allocator_test.go:5373 s17 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.205222 642 storage/allocator_test.go:5373 s17 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.207563 642 storage/allocator_test.go:5373 s8 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.208768 642 storage/allocator_test.go:5373 s14 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.209551 642 storage/allocator_test.go:5373 s8 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.210353 642 storage/allocator_test.go:5373 s11 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.211566 642 storage/allocator_test.go:5373 s8 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.212756 642 storage/allocator_test.go:5373 s8 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.213508 642 storage/allocator_test.go:5373 s8 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.214694 642 storage/allocator_test.go:5373 s11 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.216641 642 storage/allocator_test.go:5373 s15 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.217424 642 storage/allocator_test.go:5373 s15 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.218227 642 storage/allocator_test.go:5373 s6 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.218996 642 storage/allocator_test.go:5373 s6 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.220207 642 storage/allocator_test.go:5373 s13 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.221374 642 storage/allocator_test.go:5373 s11 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.222017 642 storage/allocator_test.go:5373 s15 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.222682 642 storage/allocator_test.go:5373 s11 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.223685 642 storage/allocator_test.go:5373 s8 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.224813 642 storage/allocator_test.go:5373 s15 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.226360 642 storage/allocator_test.go:5373 s11 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.227170 642 storage/allocator_test.go:5373 s11 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.229179 642 storage/allocator_test.go:5373 s8 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.229994 642 storage/allocator_test.go:5373 s11 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.230814 642 storage/allocator_test.go:5373 s6 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.231637 642 storage/allocator_test.go:5373 s8 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.233330 642 storage/allocator_test.go:5373 s8 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.235368 642 storage/allocator_test.go:5373 s11 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.236595 642 storage/allocator_test.go:5373 s15 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.237796 642 storage/allocator_test.go:5373 s15 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.239812 642 storage/allocator_test.go:5373 s5 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.241775 642 storage/allocator_test.go:5373 s11 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.243029 642 storage/allocator_test.go:5373 s5 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.245132 642 storage/allocator_test.go:5373 s14 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.248704 642 storage/allocator_test.go:5373 s5 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.250703 642 storage/allocator_test.go:5373 s1 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.252545 642 storage/allocator_test.go:5373 s16 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.254324 642 storage/allocator_test.go:5373 s18 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.257594 642 storage/allocator_test.go:5373 s1 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.258785 642 storage/allocator_test.go:5373 s19 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.260390 642 storage/allocator_test.go:5373 s1 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.261563 642 storage/allocator_test.go:5373 s1 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.262353 642 storage/allocator_test.go:5373 s1 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.263157 642 storage/allocator_test.go:5373 s1 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.265686 642 storage/allocator_test.go:5373 s12 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.268465 642 storage/allocator_test.go:5373 s19 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.269733 642 storage/allocator_test.go:5373 s19 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.271309 642 storage/allocator_test.go:5373 s5 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.272487 642 storage/allocator_test.go:5373 s18 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.273277 642 storage/allocator_test.go:5373 s1 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.274074 642 storage/allocator_test.go:5373 s12 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.276531 642 storage/allocator_test.go:5373 s1 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.280872 642 storage/allocator_test.go:5373 s19 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.282429 642 storage/allocator_test.go:5373 s4 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.284896 642 storage/allocator_test.go:5373 s12 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.286361 642 storage/allocator_test.go:5373 s7 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.287359 642 storage/allocator_test.go:5373 s18 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.288177 642 storage/allocator_test.go:5373 s16 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.291740 642 storage/allocator_test.go:5373 s16 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.293367 642 storage/allocator_test.go:5373 s6 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.295347 642 storage/allocator_test.go:5373 s10 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.297205 642 storage/allocator_test.go:5373 s13 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.297758 642 storage/allocator_test.go:5373 s6 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.298748 642 storage/allocator_test.go:5373 s6 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.299818 642 storage/allocator_test.go:5373 s6 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.300342 642 storage/allocator_test.go:5373 s4 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.304744 642 storage/allocator_test.go:5373 s0 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.305958 642 storage/allocator_test.go:5373 s7 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.306480 642 storage/allocator_test.go:5373 s7 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.307245 642 storage/allocator_test.go:5373 s18 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.307796 642 storage/allocator_test.go:5373 s3 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.308571 642 storage/allocator_test.go:5373 s3 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.311248 642 storage/allocator_test.go:5373 s14 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.312212 642 storage/allocator_test.go:5373 s14 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.312749 642 storage/allocator_test.go:5373 s3 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.313237 642 storage/allocator_test.go:5373 s14 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.314425 642 storage/allocator_test.go:5373 s14 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.315969 642 storage/allocator_test.go:5373 s7 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.321366 642 storage/allocator_test.go:5373 s14 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.323978 642 storage/allocator_test.go:5373 s14 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.324871 642 storage/allocator_test.go:5373 s18 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.325697 642 storage/allocator_test.go:5373 s4 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.327156 642 storage/allocator_test.go:5373 s5 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.328211 642 storage/allocator_test.go:5373 s18 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.330051 642 storage/allocator_test.go:5373 s18 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.331493 642 storage/allocator_test.go:5373 s9 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.334655 642 storage/allocator_test.go:5373 s4 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.335622 642 storage/allocator_test.go:5373 s9 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.336872 642 storage/allocator_test.go:5373 s9 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.337876 642 storage/allocator_test.go:5373 s4 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.338426 642 storage/allocator_test.go:5373 s5 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.339101 642 storage/allocator_test.go:5373 s5 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.340381 642 storage/allocator_test.go:5373 s10 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.340989 642 storage/allocator_test.go:5373 s9 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.341595 642 storage/allocator_test.go:5373 s9 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.342864 642 storage/allocator_test.go:5373 s15 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.344067 642 storage/allocator_test.go:5373 s13 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.345492 642 storage/allocator_test.go:5373 s5 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.346940 642 storage/allocator_test.go:5373 s9 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.348054 642 storage/allocator_test.go:5373 s5 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.348624 642 storage/allocator_test.go:5373 s5 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.350274 642 storage/allocator_test.go:5373 s9 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.351096 642 storage/allocator_test.go:5373 s0 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.351927 642 storage/allocator_test.go:5373 s4 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.353671 642 storage/allocator_test.go:5373 s5 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.355265 642 storage/allocator_test.go:5373 s13 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.357178 642 storage/allocator_test.go:5373 s13 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.359241 642 storage/allocator_test.go:5373 s12 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.360484 642 storage/allocator_test.go:5373 s12 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.361729 642 storage/allocator_test.go:5373 s5 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.364460 642 storage/allocator_test.go:5373 s2 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.365675 642 storage/allocator_test.go:5373 s1 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.366886 642 storage/allocator_test.go:5373 s1 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.369595 642 storage/allocator_test.go:5373 s7 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.372287 642 storage/allocator_test.go:5373 s3 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.376608 642 storage/allocator_test.go:5373 s2 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.378065 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.379381 642 storage/allocator_test.go:5373 s11 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.379958 642 storage/allocator_test.go:5373 s8 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.381216 642 storage/allocator_test.go:5373 s19 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.381750 642 storage/allocator_test.go:5373 s11 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.382516 642 storage/allocator_test.go:5373 s11 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.384617 642 storage/allocator_test.go:5373 s8 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.385193 642 storage/allocator_test.go:5373 s6 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.385755 642 storage/allocator_test.go:5373 s2 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.387308 642 storage/allocator_test.go:5373 s1 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.389157 642 storage/allocator_test.go:5373 s17 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.389820 642 storage/allocator_test.go:5373 s1 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.391170 642 storage/allocator_test.go:5373 s11 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.391816 642 storage/allocator_test.go:5373 s11 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.392625 642 storage/allocator_test.go:5373 s11 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.393165 642 storage/allocator_test.go:5373 s6 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.393695 642 storage/allocator_test.go:5373 s6 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.394243 642 storage/allocator_test.go:5373 s11 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.395781 642 storage/allocator_test.go:5373 s1 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.398493 642 storage/allocator_test.go:5373 s8 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.399342 642 storage/allocator_test.go:5373 s7 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.401800 642 storage/allocator_test.go:5373 s11 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.403923 642 storage/allocator_test.go:5373 s18 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.404751 642 storage/allocator_test.go:5373 s3 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.405600 642 storage/allocator_test.go:5373 s4 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.409051 642 storage/allocator_test.go:5373 s14 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.410279 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.414392 642 storage/allocator_test.go:5373 s16 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.414970 642 storage/allocator_test.go:5373 s4 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.415511 642 storage/allocator_test.go:5373 s14 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.416091 642 storage/allocator_test.go:5373 s10 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.416841 642 storage/allocator_test.go:5373 s12 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.417930 642 storage/allocator_test.go:5373 s14 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.419821 642 storage/allocator_test.go:5373 s0 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.420368 642 storage/allocator_test.go:5373 s10 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.420996 642 storage/allocator_test.go:5373 s12 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.427291 642 storage/allocator_test.go:5373 s19 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.428116 642 storage/allocator_test.go:5373 s0 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.428915 642 storage/allocator_test.go:5373 s19 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.429699 642 storage/allocator_test.go:5373 s19 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.432028 642 storage/allocator_test.go:5373 s13 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.432824 642 storage/allocator_test.go:5373 s0 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.434371 642 storage/allocator_test.go:5373 s0 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.437876 642 storage/allocator_test.go:5373 s19 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.438657 642 storage/allocator_test.go:5373 s13 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.439870 642 storage/allocator_test.go:5373 s0 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.442236 642 storage/allocator_test.go:5373 s13 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.444557 642 storage/allocator_test.go:5373 s13 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.446211 642 storage/allocator_test.go:5373 s19 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.450673 642 storage/allocator_test.go:5373 s19 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.451974 642 storage/allocator_test.go:5373 s7 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.453967 642 storage/allocator_test.go:5373 s3 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.455185 642 storage/allocator_test.go:5373 s7 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.457411 642 storage/allocator_test.go:5373 s3 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.458641 642 storage/allocator_test.go:5373 s2 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.459877 642 storage/allocator_test.go:5373 s4 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.461834 642 storage/allocator_test.go:5373 s9 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.464688 642 storage/allocator_test.go:5373 s2 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.468749 642 storage/allocator_test.go:5373 s3 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.469268 642 storage/allocator_test.go:5373 s10 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.470975 642 storage/allocator_test.go:5373 s2 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.472320 642 storage/allocator_test.go:5373 s14 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.473080 642 storage/allocator_test.go:5373 s15 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.473786 642 storage/allocator_test.go:5373 s3 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.475038 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.476271 642 storage/allocator_test.go:5373 s5 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.477084 642 storage/allocator_test.go:5373 s5 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.477927 642 storage/allocator_test.go:5373 s14 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.478734 642 storage/allocator_test.go:5373 s14 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.481489 642 storage/allocator_test.go:5373 s3 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.483497 642 storage/allocator_test.go:5373 s14 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.484321 642 storage/allocator_test.go:5373 s9 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.485159 642 storage/allocator_test.go:5373 s6 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.486770 642 storage/allocator_test.go:5373 s10 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.488010 642 storage/allocator_test.go:5373 s14 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.488842 642 storage/allocator_test.go:5373 s14 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.489664 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.490921 642 storage/allocator_test.go:5373 s3 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.498173 642 storage/allocator_test.go:5373 s1 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.499430 642 storage/allocator_test.go:5373 s1 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.505058 642 storage/allocator_test.go:5373 s1 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.507104 642 storage/allocator_test.go:5373 s0 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.507943 642 storage/allocator_test.go:5373 s7 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.508792 642 storage/allocator_test.go:5373 s18 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.509663 642 storage/allocator_test.go:5373 s16 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.510509 642 storage/allocator_test.go:5373 s18 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.512139 642 storage/allocator_test.go:5373 s1 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.512980 642 storage/allocator_test.go:5373 s16 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.514960 642 storage/allocator_test.go:5373 s16 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.515813 642 storage/allocator_test.go:5373 s11 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.517820 642 storage/allocator_test.go:5373 s18 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.519456 642 storage/allocator_test.go:5373 s12 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.520265 642 storage/allocator_test.go:5373 s13 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.521083 642 storage/allocator_test.go:5373 s7 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.521893 642 storage/allocator_test.go:5373 s7 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.522697 642 storage/allocator_test.go:5373 s1 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.524084 642 storage/allocator_test.go:5373 s1 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.524642 642 storage/allocator_test.go:5373 s1 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.526590 642 storage/allocator_test.go:5373 s7 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.527432 642 storage/allocator_test.go:5373 s12 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.529393 642 storage/allocator_test.go:5373 s12 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.531883 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.533121 642 storage/allocator_test.go:5373 s11 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.535123 642 storage/allocator_test.go:5373 s18 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.536055 642 storage/allocator_test.go:5373 s7 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.538803 642 storage/allocator_test.go:5373 s18 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.540071 642 storage/allocator_test.go:5373 s18 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.543372 642 storage/allocator_test.go:5373 s6 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.544560 642 storage/allocator_test.go:5373 s6 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.548745 642 storage/allocator_test.go:5373 s15 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.550317 642 storage/allocator_test.go:5373 s6 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.551103 642 storage/allocator_test.go:5373 s9 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.551624 642 storage/allocator_test.go:5373 s10 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.552823 642 storage/allocator_test.go:5373 s9 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.554031 642 storage/allocator_test.go:5373 s15 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.554521 642 storage/allocator_test.go:5373 s4 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.555025 642 storage/allocator_test.go:5373 s6 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.555527 642 storage/allocator_test.go:5373 s6 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.556273 642 storage/allocator_test.go:5373 s15 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.557039 642 storage/allocator_test.go:5373 s5 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.563036 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.563865 642 storage/allocator_test.go:5373 s10 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.566247 642 storage/allocator_test.go:5373 s2 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.567026 642 storage/allocator_test.go:5373 s14 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.568253 642 storage/allocator_test.go:5373 s10 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.571426 642 storage/allocator_test.go:5373 s11 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.572627 642 storage/allocator_test.go:5373 s19 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.573801 642 storage/allocator_test.go:5373 s19 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.574590 642 storage/allocator_test.go:5373 s17 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.576984 642 storage/allocator_test.go:5373 s19 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.577813 642 storage/allocator_test.go:5373 s19 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.578977 642 storage/allocator_test.go:5373 s3 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.582112 642 storage/allocator_test.go:5373 s17 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.583330 642 storage/allocator_test.go:5373 s5 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.584533 642 storage/allocator_test.go:5373 s10 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.585775 642 storage/allocator_test.go:5373 s5 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.589328 642 storage/allocator_test.go:5373 s10 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.591666 642 storage/allocator_test.go:5373 s10 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.592820 642 storage/allocator_test.go:5373 s19 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.593879 642 storage/allocator_test.go:5373 s14 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.594739 642 storage/allocator_test.go:5373 s19 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.596766 642 storage/allocator_test.go:5373 s9 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.597729 642 storage/allocator_test.go:5373 s19 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.599207 642 storage/allocator_test.go:5373 s4 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.600068 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.601387 642 storage/allocator_test.go:5373 s19 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.602989 642 storage/allocator_test.go:5373 s16 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.603509 642 storage/allocator_test.go:5373 s6 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.604078 642 storage/allocator_test.go:5373 s0 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.604836 642 storage/allocator_test.go:5373 s9 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.605637 642 storage/allocator_test.go:5373 s6 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.606648 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.607285 642 storage/allocator_test.go:5373 s12 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.608289 642 storage/allocator_test.go:5373 s16 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.609339 642 storage/allocator_test.go:5373 s12 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.609875 642 storage/allocator_test.go:5373 s9 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.610386 642 storage/allocator_test.go:5373 s8 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.611461 642 storage/allocator_test.go:5373 s6 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.612235 642 storage/allocator_test.go:5373 s6 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.612982 642 storage/allocator_test.go:5373 s16 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.613957 642 storage/allocator_test.go:5373 s12 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.614709 642 storage/allocator_test.go:5373 s6 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.615783 642 storage/allocator_test.go:5373 s16 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.616418 642 storage/allocator_test.go:5373 s0 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.617449 642 storage/allocator_test.go:5373 s6 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.618456 642 storage/allocator_test.go:5373 s15 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.619011 642 storage/allocator_test.go:5373 s0 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.619549 642 storage/allocator_test.go:5373 s6 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.620695 642 storage/allocator_test.go:5373 s6 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.622022 642 storage/allocator_test.go:5373 s15 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.624419 642 storage/allocator_test.go:5373 s2 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.627175 642 storage/allocator_test.go:5373 s2 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.630033 642 storage/allocator_test.go:5373 s5 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.631136 642 storage/allocator_test.go:5373 s7 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.632890 642 storage/allocator_test.go:5373 s1 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.634620 642 storage/allocator_test.go:5373 s11 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.635888 642 storage/allocator_test.go:5373 s7 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.636709 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.637994 642 storage/allocator_test.go:5373 s11 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.638827 642 storage/allocator_test.go:5373 s11 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.639700 642 storage/allocator_test.go:5373 s7 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.640304 642 storage/allocator_test.go:5373 s5 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.641448 642 storage/allocator_test.go:5373 s1 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.642567 642 storage/allocator_test.go:5373 s17 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.643154 642 storage/allocator_test.go:5373 s18 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.644221 642 storage/allocator_test.go:5373 s1 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.644811 642 storage/allocator_test.go:5373 s1 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.646619 642 storage/allocator_test.go:5373 s14 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.647662 642 storage/allocator_test.go:5373 s1 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.648460 642 storage/allocator_test.go:5373 s3 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.648997 642 storage/allocator_test.go:5373 s1 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.650006 642 storage/allocator_test.go:5373 s11 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.651036 642 storage/allocator_test.go:5373 s14 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.651611 642 storage/allocator_test.go:5373 s7 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.652638 642 storage/allocator_test.go:5373 s17 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.653171 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.655643 642 storage/allocator_test.go:5373 s17 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.657000 642 storage/allocator_test.go:5373 s5 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.658331 642 storage/allocator_test.go:5373 s17 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.659137 642 storage/allocator_test.go:5373 s17 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.663442 642 storage/allocator_test.go:5373 s14 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.667108 642 storage/allocator_test.go:5373 s3 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.668364 642 storage/allocator_test.go:5373 s17 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.671210 642 storage/allocator_test.go:5373 s16 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.672882 642 storage/allocator_test.go:5373 s3 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.673738 642 storage/allocator_test.go:5373 s8 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.675339 642 storage/allocator_test.go:5373 s16 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.676174 642 storage/allocator_test.go:5373 s15 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.677015 642 storage/allocator_test.go:5373 s15 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.679004 642 storage/allocator_test.go:5373 s3 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.680640 642 storage/allocator_test.go:5373 s4 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.682239 642 storage/allocator_test.go:5373 s8 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.690322 642 storage/allocator_test.go:5373 s4 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.691173 642 storage/allocator_test.go:5373 s12 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.692382 642 storage/allocator_test.go:5373 s4 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.693986 642 storage/allocator_test.go:5373 s4 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.694808 642 storage/allocator_test.go:5373 s9 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.696838 642 storage/allocator_test.go:5373 s19 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.699244 642 storage/allocator_test.go:5373 s19 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.700058 642 storage/allocator_test.go:5373 s12 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.701655 642 storage/allocator_test.go:5373 s19 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.702450 642 storage/allocator_test.go:5373 s12 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.703722 642 storage/allocator_test.go:5373 s12 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.705306 642 storage/allocator_test.go:5373 s19 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.706097 642 storage/allocator_test.go:5373 s19 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.708100 642 storage/allocator_test.go:5373 s2 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.710489 642 storage/allocator_test.go:5373 s19 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.711337 642 storage/allocator_test.go:5373 s4 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.712986 642 storage/allocator_test.go:5373 s8 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.714882 642 storage/allocator_test.go:5373 s19 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.717141 642 storage/allocator_test.go:5373 s1 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.719026 642 storage/allocator_test.go:5373 s9 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.719835 642 storage/allocator_test.go:5373 s0 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.720624 642 storage/allocator_test.go:5373 s1 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.722427 642 storage/allocator_test.go:5373 s3 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.723252 642 storage/allocator_test.go:5373 s0 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.725034 642 storage/allocator_test.go:5373 s11 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.725844 642 storage/allocator_test.go:5373 s11 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.727206 642 storage/allocator_test.go:5373 s15 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.729455 642 storage/allocator_test.go:5373 s0 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.730405 642 storage/allocator_test.go:5373 s18 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.731649 642 storage/allocator_test.go:5373 s1 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.732772 642 storage/allocator_test.go:5373 s15 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.734250 642 storage/allocator_test.go:5373 s1 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.734778 642 storage/allocator_test.go:5373 s1 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.736022 642 storage/allocator_test.go:5373 s11 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.736803 642 storage/allocator_test.go:5373 s11 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.737840 642 storage/allocator_test.go:5373 s11 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.738613 642 storage/allocator_test.go:5373 s11 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.739482 642 storage/allocator_test.go:5373 s0 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.740763 642 storage/allocator_test.go:5373 s1 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.742767 642 storage/allocator_test.go:5373 s5 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.743606 642 storage/allocator_test.go:5373 s7 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.745586 642 storage/allocator_test.go:5373 s16 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.746776 642 storage/allocator_test.go:5373 s15 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.749529 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.751608 642 storage/allocator_test.go:5373 s9 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.754586 642 storage/allocator_test.go:5373 s11 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.757178 642 storage/allocator_test.go:5373 s15 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.760703 642 storage/allocator_test.go:5373 s16 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.764755 642 storage/allocator_test.go:5373 s8 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.766749 642 storage/allocator_test.go:5373 s8 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.767969 642 storage/allocator_test.go:5373 s6 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.768771 642 storage/allocator_test.go:5373 s8 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.771543 642 storage/allocator_test.go:5373 s2 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.773525 642 storage/allocator_test.go:5373 s8 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.775516 642 storage/allocator_test.go:5373 s13 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.776333 642 storage/allocator_test.go:5373 s13 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.777538 642 storage/allocator_test.go:5373 s8 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.778346 642 storage/allocator_test.go:5373 s6 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.779151 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.779942 642 storage/allocator_test.go:5373 s13 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.781911 642 storage/allocator_test.go:5373 s13 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.783103 642 storage/allocator_test.go:5373 s14 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.784315 642 storage/allocator_test.go:5373 s8 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.785501 642 storage/allocator_test.go:5373 s4 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.785986 642 storage/allocator_test.go:5373 s6 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.786929 642 storage/allocator_test.go:5373 s10 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.787463 642 storage/allocator_test.go:5373 s8 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.788263 642 storage/allocator_test.go:5373 s14 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.789496 642 storage/allocator_test.go:5373 s6 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.791271 642 storage/allocator_test.go:5373 s8 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.793336 642 storage/allocator_test.go:5373 s8 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.795334 642 storage/allocator_test.go:5373 s13 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.797326 642 storage/allocator_test.go:5373 s17 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.801861 642 storage/allocator_test.go:5373 s2 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.802755 642 storage/allocator_test.go:5373 s17 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.804607 642 storage/allocator_test.go:5373 s3 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.805715 642 storage/allocator_test.go:5373 s2 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.807320 642 storage/allocator_test.go:5373 s5 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.808134 642 storage/allocator_test.go:5373 s18 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.809200 642 storage/allocator_test.go:5373 s16 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.809941 642 storage/allocator_test.go:5373 s16 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.810710 642 storage/allocator_test.go:5373 s2 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.817429 642 storage/allocator_test.go:5373 s0 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.818219 642 storage/allocator_test.go:5373 s0 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.820912 642 storage/allocator_test.go:5373 s7 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.822083 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.823265 642 storage/allocator_test.go:5373 s7 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.824033 642 storage/allocator_test.go:5373 s0 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.824803 642 storage/allocator_test.go:5373 s7 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.825360 642 storage/allocator_test.go:5373 s7 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.826055 642 storage/allocator_test.go:5373 s7 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.826555 642 storage/allocator_test.go:5373 s0 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.827956 642 storage/allocator_test.go:5373 s0 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.828671 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.830909 642 storage/allocator_test.go:5373 s7 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.831932 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.832641 642 storage/allocator_test.go:5373 s9 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.833117 642 storage/allocator_test.go:5373 s9 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.833589 642 storage/allocator_test.go:5373 s7 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.834051 642 storage/allocator_test.go:5373 s7 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.834806 642 storage/allocator_test.go:5373 s0 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.836143 642 storage/allocator_test.go:5373 s7 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.839678 642 storage/allocator_test.go:5373 s7 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.840869 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.845903 642 storage/allocator_test.go:5373 s2 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.847886 642 storage/allocator_test.go:5373 s19 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.849829 642 storage/allocator_test.go:5373 s5 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.851081 642 storage/allocator_test.go:5373 s6 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.852304 642 storage/allocator_test.go:5373 s2 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.853546 642 storage/allocator_test.go:5373 s13 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.855504 642 storage/allocator_test.go:5373 s1 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.856724 642 storage/allocator_test.go:5373 s13 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.858726 642 storage/allocator_test.go:5373 s14 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.861929 642 storage/allocator_test.go:5373 s1 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.862748 642 storage/allocator_test.go:5373 s12 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.867379 642 storage/allocator_test.go:5373 s14 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.868044 642 storage/allocator_test.go:5373 s13 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.869994 642 storage/allocator_test.go:5373 s12 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.870760 642 storage/allocator_test.go:5373 s1 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.874456 642 storage/allocator_test.go:5373 s12 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.875002 642 storage/allocator_test.go:5373 s1 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.879354 642 storage/allocator_test.go:5373 s12 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.880154 642 storage/allocator_test.go:5373 s17 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.887757 642 storage/allocator_test.go:5373 s18 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.888531 642 storage/allocator_test.go:5373 s18 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.889034 642 storage/allocator_test.go:5373 s10 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.889545 642 storage/allocator_test.go:5373 s15 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.890482 642 storage/allocator_test.go:5373 s15 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.891290 642 storage/allocator_test.go:5373 s18 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.892708 642 storage/allocator_test.go:5373 s18 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.893213 642 storage/allocator_test.go:5373 s15 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.895111 642 storage/allocator_test.go:5373 s18 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.895632 642 storage/allocator_test.go:5373 s10 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.896140 642 storage/allocator_test.go:5373 s15 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.897107 642 storage/allocator_test.go:5373 s19 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.897859 642 storage/allocator_test.go:5373 s19 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.899643 642 storage/allocator_test.go:5373 s10 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.900379 642 storage/allocator_test.go:5373 s15 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.903315 642 storage/allocator_test.go:5373 s18 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.904447 642 storage/allocator_test.go:5373 s19 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.905225 642 storage/allocator_test.go:5373 s15 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:31.906646 642 storage/allocator_test.go:5373 s18 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.907495 642 storage/allocator_test.go:5373 s10 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.908465 642 storage/allocator_test.go:5373 s15 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.909601 642 storage/allocator_test.go:5373 s17 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.912975 642 storage/allocator_test.go:5373 s4 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.921063 642 storage/allocator_test.go:5373 s1 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.922149 642 storage/allocator_test.go:5373 s5 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.923738 642 storage/allocator_test.go:5373 s1 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.924813 642 storage/allocator_test.go:5373 s13 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.925362 642 storage/allocator_test.go:5373 s2 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.929897 642 storage/allocator_test.go:5373 s6 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.930459 642 storage/allocator_test.go:5373 s6 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.932272 642 storage/allocator_test.go:5373 s9 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.933332 642 storage/allocator_test.go:5373 s14 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.933897 642 storage/allocator_test.go:5373 s12 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.934827 642 storage/allocator_test.go:5373 s17 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.935694 642 storage/allocator_test.go:5373 s3 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.936322 642 storage/allocator_test.go:5373 s3 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.937212 642 storage/allocator_test.go:5373 s5 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.937857 642 storage/allocator_test.go:5373 s8 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.938488 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.939337 642 storage/allocator_test.go:5373 s12 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.941004 642 storage/allocator_test.go:5373 s6 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.942006 642 storage/allocator_test.go:5373 s12 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.944041 642 storage/allocator_test.go:5373 s14 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.944975 642 storage/allocator_test.go:5373 s12 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.945822 642 storage/allocator_test.go:5373 s9 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.946378 642 storage/allocator_test.go:5373 s12 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.947208 642 storage/allocator_test.go:5373 s12 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.947771 642 storage/allocator_test.go:5373 s17 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.948315 642 storage/allocator_test.go:5373 s9 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:31.949368 642 storage/allocator_test.go:5373 s5 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:31.952745 642 storage/allocator_test.go:5373 s12 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:31.954510 642 storage/allocator_test.go:5373 s1 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.955919 642 storage/allocator_test.go:5373 s0 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:31.958063 642 storage/allocator_test.go:5373 s11 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:31.959793 642 storage/allocator_test.go:5373 s14 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.960736 642 storage/allocator_test.go:5373 s4 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:31.962163 642 storage/allocator_test.go:5373 s13 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.963006 642 storage/allocator_test.go:5373 s10 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:31.964777 642 storage/allocator_test.go:5373 s16 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.965514 642 storage/allocator_test.go:5373 s13 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.966050 642 storage/allocator_test.go:5373 s7 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.966828 642 storage/allocator_test.go:5373 s13 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.967378 642 storage/allocator_test.go:5373 s11 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.968400 642 storage/allocator_test.go:5373 s10 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.969127 642 storage/allocator_test.go:5373 s11 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.969646 642 storage/allocator_test.go:5373 s7 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.970420 642 storage/allocator_test.go:5373 s11 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.970957 642 storage/allocator_test.go:5373 s11 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.972398 642 storage/allocator_test.go:5373 s11 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.973181 642 storage/allocator_test.go:5373 s11 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.973704 642 storage/allocator_test.go:5373 s13 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.974421 642 storage/allocator_test.go:5373 s13 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.974932 642 storage/allocator_test.go:5373 s13 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.975913 642 storage/allocator_test.go:5373 s10 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.976910 642 storage/allocator_test.go:5373 s11 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.977423 642 storage/allocator_test.go:5373 s10 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.978486 642 storage/allocator_test.go:5373 s0 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.978998 642 storage/allocator_test.go:5373 s1 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.981526 642 storage/allocator_test.go:5373 s10 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:31.983210 642 storage/allocator_test.go:5373 s19 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:31.983945 642 storage/allocator_test.go:5373 s10 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:31.985064 642 storage/allocator_test.go:5373 s10 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:31.985798 642 storage/allocator_test.go:5373 s16 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:31.987455 642 storage/allocator_test.go:5373 s16 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:31.989396 642 storage/allocator_test.go:5373 s8 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:31.990630 642 storage/allocator_test.go:5373 s1 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:31.993598 642 storage/allocator_test.go:5373 s16 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:31.994512 642 storage/allocator_test.go:5373 s8 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:31.997590 642 storage/allocator_test.go:5373 s3 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:31.999288 642 storage/allocator_test.go:5373 s8 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:31.999812 642 storage/allocator_test.go:5373 s8 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:32.000330 642 storage/allocator_test.go:5373 s6 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.000860 642 storage/allocator_test.go:5373 s19 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.003677 642 storage/allocator_test.go:5373 s14 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:32.005338 642 storage/allocator_test.go:5373 s18 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.005868 642 storage/allocator_test.go:5373 s8 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:32.006364 642 storage/allocator_test.go:5373 s19 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.006886 642 storage/allocator_test.go:5373 s15 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.011507 642 storage/allocator_test.go:5373 s3 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:32.014869 642 storage/allocator_test.go:5373 s4 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.015883 642 storage/allocator_test.go:5373 s19 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:32.016986 642 storage/allocator_test.go:5373 s15 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.018097 642 storage/allocator_test.go:5373 s15 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.023374 642 storage/allocator_test.go:5373 s4 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:32.027537 642 storage/allocator_test.go:5373 s4 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:32.028964 642 storage/allocator_test.go:5373 s18 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.029778 642 storage/allocator_test.go:5373 s17 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:32.030973 642 storage/allocator_test.go:5373 s4 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.032197 642 storage/allocator_test.go:5373 s18 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:32.032996 642 storage/allocator_test.go:5373 s4 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:32.034657 642 storage/allocator_test.go:5373 s7 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:32.041489 642 storage/allocator_test.go:5373 s13 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:32.042069 642 storage/allocator_test.go:5373 s13 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:32.042590 642 storage/allocator_test.go:5373 s5 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:32.043201 642 storage/allocator_test.go:5373 s12 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.043741 642 storage/allocator_test.go:5373 s14 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:32.044762 642 storage/allocator_test.go:5373 s12 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:32.045322 642 storage/allocator_test.go:5373 s13 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:32.045912 642 storage/allocator_test.go:5373 s14 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:32.046505 642 storage/allocator_test.go:5373 s17 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.047026 642 storage/allocator_test.go:5373 s13 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:32.048366 642 storage/allocator_test.go:5373 s5 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:32.049918 642 storage/allocator_test.go:5373 s13 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:32.050389 642 storage/allocator_test.go:5373 s5 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:32.050918 642 storage/allocator_test.go:5373 s13 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:32.051442 642 storage/allocator_test.go:5373 s12 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:32.051994 642 storage/allocator_test.go:5373 s13 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:32.052533 642 storage/allocator_test.go:5373 s13 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.053074 642 storage/allocator_test.go:5373 s12 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:32.054061 642 storage/allocator_test.go:5373 s12 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:32.054543 642 storage/allocator_test.go:5373 s17 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:32.055045 642 storage/allocator_test.go:5373 s12 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:32.055814 642 storage/allocator_test.go:5373 s12 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.056786 642 storage/allocator_test.go:5373 s12 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:32.058160 642 storage/allocator_test.go:5373 s14 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:32.059179 642 storage/allocator_test.go:5373 s14 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:32.059708 642 storage/allocator_test.go:5373 s12 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:32.060262 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:32.061179 642 storage/allocator_test.go:5373 s13 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:32.061975 642 storage/allocator_test.go:5373 s13 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:32.062697 642 storage/allocator_test.go:5373 s12 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.063427 642 storage/allocator_test.go:5373 s1 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:32.066690 642 storage/allocator_test.go:5373 s2 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:32.067723 642 storage/allocator_test.go:5373 s12 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:32.068623 642 storage/allocator_test.go:5373 s0 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:32.069674 642 storage/allocator_test.go:5373 s12 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.072599 642 storage/allocator_test.go:5373 s3 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:32.074280 642 storage/allocator_test.go:5373 s0 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:32.075118 642 storage/allocator_test.go:5373 s1 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:32.077119 642 storage/allocator_test.go:5373 s15 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:32.079051 642 storage/allocator_test.go:5373 s8 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.079611 642 storage/allocator_test.go:5373 s2 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.080135 642 storage/allocator_test.go:5373 s19 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:32.081286 642 storage/allocator_test.go:5373 s18 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:32.083932 642 storage/allocator_test.go:5373 s2 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:32.086765 642 storage/allocator_test.go:5373 s11 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.087326 642 storage/allocator_test.go:5373 s15 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.087835 642 storage/allocator_test.go:5373 s2 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:32.088940 642 storage/allocator_test.go:5373 s2 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:32.092550 642 storage/allocator_test.go:5373 s15 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:32.096362 642 storage/allocator_test.go:5373 s8 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.097142 642 storage/allocator_test.go:5373 s15 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.097924 642 storage/allocator_test.go:5373 s10 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:32.101253 642 storage/allocator_test.go:5373 s19 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:32.104364 642 storage/allocator_test.go:5373 s18 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:32.107539 642 storage/allocator_test.go:5373 s18 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.108057 642 storage/allocator_test.go:5373 s9 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.108561 642 storage/allocator_test.go:5373 s3 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:32.109062 642 storage/allocator_test.go:5373 s18 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:32.110314 642 storage/allocator_test.go:5373 s3 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:32.111354 642 storage/allocator_test.go:5373 s3 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:32.114015 642 storage/allocator_test.go:5373 s6 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.114593 642 storage/allocator_test.go:5373 s18 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.115117 642 storage/allocator_test.go:5373 s3 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:32.115624 642 storage/allocator_test.go:5373 s11 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:32.116877 642 storage/allocator_test.go:5373 s10 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:32.118489 642 storage/allocator_test.go:5373 s9 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:32.123756 642 storage/allocator_test.go:5373 s17 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.124821 642 storage/allocator_test.go:5373 s1 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.125962 642 storage/allocator_test.go:5373 s17 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:32.129745 642 storage/allocator_test.go:5373 s7 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:32.131278 642 storage/allocator_test.go:5373 s7 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.132241 642 storage/allocator_test.go:5373 s7 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:32.134230 642 storage/allocator_test.go:5373 s19 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:32.136008 642 storage/allocator_test.go:5373 s19 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.136672 642 storage/allocator_test.go:5373 s14 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:32.137227 642 storage/allocator_test.go:5373 s5 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.137723 642 storage/allocator_test.go:5373 s1 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.138896 642 storage/allocator_test.go:5373 s14 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:32.139443 642 storage/allocator_test.go:5373 s19 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:32.144627 642 storage/allocator_test.go:5373 s2 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:32.145395 642 storage/allocator_test.go:5373 s17 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.146245 642 storage/allocator_test.go:5373 s10 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:32.146986 642 storage/allocator_test.go:5373 s10 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:32.148447 642 storage/allocator_test.go:5373 s2 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.150811 642 storage/allocator_test.go:5373 s10 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:32.151473 642 storage/allocator_test.go:5373 s8 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:32.152405 642 storage/allocator_test.go:5373 s10 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:32.153332 642 storage/allocator_test.go:5373 s15 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.154221 642 storage/allocator_test.go:5373 s6 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:32.155127 642 storage/allocator_test.go:5373 s17 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:32.156825 642 storage/allocator_test.go:5373 s10 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.158848 642 storage/allocator_test.go:5373 s2 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:32.159436 642 storage/allocator_test.go:5373 s10 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:32.161375 642 storage/allocator_test.go:5373 s2 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:32.163952 642 storage/allocator_test.go:5373 s6 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:32.165267 642 storage/allocator_test.go:5373 s1 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:32.168537 642 storage/allocator_test.go:5373 s5 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.173415 642 storage/allocator_test.go:5373 s1 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:32.174265 642 storage/allocator_test.go:5373 s5 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:32.175887 642 storage/allocator_test.go:5373 s18 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:32.178072 642 storage/allocator_test.go:5373 s18 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.181708 642 storage/allocator_test.go:5373 s1 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:32.184242 642 storage/allocator_test.go:5373 s5 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.187872 642 storage/allocator_test.go:5373 s1 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:32.191907 642 storage/allocator_test.go:5373 s19 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:32.192747 642 storage/allocator_test.go:5373 s19 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.193536 642 storage/allocator_test.go:5373 s9 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:32.236016 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:32.238070 642 storage/allocator_test.go:5373 s3 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:32.239954 642 storage/allocator_test.go:5373 s3 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:32.245613 642 storage/allocator_test.go:5373 s15 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.246454 642 storage/allocator_test.go:5373 s15 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.247280 642 storage/allocator_test.go:5373 s3 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:32.248093 642 storage/allocator_test.go:5373 s7 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:32.248624 642 storage/allocator_test.go:5373 s3 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:32.249399 642 storage/allocator_test.go:5373 s18 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:32.249969 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:32.250502 642 storage/allocator_test.go:5373 s7 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:32.251948 642 storage/allocator_test.go:5373 s7 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:32.253680 642 storage/allocator_test.go:5373 s3 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.254562 642 storage/allocator_test.go:5373 s11 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.255435 642 storage/allocator_test.go:5373 s18 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:32.256277 642 storage/allocator_test.go:5373 s7 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:32.256805 642 storage/allocator_test.go:5373 s7 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:32.257660 642 storage/allocator_test.go:5373 s7 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:32.258252 642 storage/allocator_test.go:5373 s3 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:32.259231 642 storage/allocator_test.go:5373 s7 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:32.262514 642 storage/allocator_test.go:5373 s18 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:32.265434 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.266756 642 storage/allocator_test.go:5373 s0 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.268107 642 storage/allocator_test.go:5373 s12 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:32.269596 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:32.270566 642 storage/allocator_test.go:5373 s0 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:32.273554 642 storage/allocator_test.go:5373 s14 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.274850 642 storage/allocator_test.go:5373 s1 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:32.279600 642 storage/allocator_test.go:5373 s14 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:32.283095 642 storage/allocator_test.go:5373 s10 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.284502 642 storage/allocator_test.go:5373 s17 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:32.288767 642 storage/allocator_test.go:5373 s0 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:32.291541 642 storage/allocator_test.go:5373 s14 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.294060 642 storage/allocator_test.go:5373 s5 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:32.298882 642 storage/allocator_test.go:5373 s19 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:32.300934 642 storage/allocator_test.go:5373 s16 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:32.302253 642 storage/allocator_test.go:5373 s16 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:32.303274 642 storage/allocator_test.go:5373 s2 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.304472 642 storage/allocator_test.go:5373 s12 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:32.306603 642 storage/allocator_test.go:5373 s10 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:32.308294 642 storage/allocator_test.go:5373 s16 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:32.309372 642 storage/allocator_test.go:5373 s5 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.310927 642 storage/allocator_test.go:5373 s2 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:32.313589 642 storage/allocator_test.go:5373 s12 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:32.318205 642 storage/allocator_test.go:5373 s12 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:32.325281 642 storage/allocator_test.go:5373 s16 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:32.329087 642 storage/allocator_test.go:5373 s10 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:32.332361 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:32.333361 642 storage/allocator_test.go:5373 s3 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:32.334447 642 storage/allocator_test.go:5373 s3 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:32.336423 642 storage/allocator_test.go:5373 s3 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.337809 642 storage/allocator_test.go:5373 s14 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:32.346325 642 storage/allocator_test.go:5373 s1 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.352422 642 storage/allocator_test.go:5373 s6 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.361063 642 storage/allocator_test.go:5373 s14 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.437295 642 storage/allocator_test.go:5373 s17 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.438528 642 storage/allocator_test.go:5373 s9 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.440568 642 storage/allocator_test.go:5373 s11 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:32.447348 642 storage/allocator_test.go:5373 s17 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.448562 642 storage/allocator_test.go:5373 s8 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.450549 642 storage/allocator_test.go:5373 s8 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:32.460138 642 storage/allocator_test.go:5373 s11 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.462078 642 storage/allocator_test.go:5373 s11 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.465532 642 storage/allocator_test.go:5373 s13 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:32.653323 642 storage/allocator_test.go:5373 s15 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:32.654289 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:32.655435 642 storage/allocator_test.go:5373 s4 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:32.656777 642 storage/allocator_test.go:5373 s12 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:32.658310 642 storage/allocator_test.go:5373 s4 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:32.659423 642 storage/allocator_test.go:5373 s3 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:32.660653 642 storage/allocator_test.go:5373 s7 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.661600 642 storage/allocator_test.go:5373 s15 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:32.662968 642 storage/allocator_test.go:5373 s1 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.664946 642 storage/allocator_test.go:5373 s7 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:32.665912 642 storage/allocator_test.go:5373 s15 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:32.666561 642 storage/allocator_test.go:5373 s15 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:32.667703 642 storage/allocator_test.go:5373 s3 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:32.668428 642 storage/allocator_test.go:5373 s3 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:32.669334 642 storage/allocator_test.go:5373 s3 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:32.670411 642 storage/allocator_test.go:5373 s3 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:32.671041 642 storage/allocator_test.go:5373 s12 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:32.671959 642 storage/allocator_test.go:5373 s4 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:32.672675 642 storage/allocator_test.go:5373 s3 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:32.673458 642 storage/allocator_test.go:5373 s3 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.674294 642 storage/allocator_test.go:5373 s3 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:32.675454 642 storage/allocator_test.go:5373 s4 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.676516 642 storage/allocator_test.go:5373 s12 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:32.677104 642 storage/allocator_test.go:5373 s12 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:32.677746 642 storage/allocator_test.go:5373 s3 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:32.678313 642 storage/allocator_test.go:5373 s3 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:32.678908 642 storage/allocator_test.go:5373 s4 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:32.680501 642 storage/allocator_test.go:5373 s4 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:32.682482 642 storage/allocator_test.go:5373 s2 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:32.683392 642 storage/allocator_test.go:5373 s12 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:32.685608 642 storage/allocator_test.go:5373 s5 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:32.686522 642 storage/allocator_test.go:5373 s5 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.687379 642 storage/allocator_test.go:5373 s15 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:32.689058 642 storage/allocator_test.go:5373 s5 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.691875 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:32.693152 642 storage/allocator_test.go:5373 s9 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:32.694423 642 storage/allocator_test.go:5373 s5 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:32.695695 642 storage/allocator_test.go:5373 s8 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:32.697374 642 storage/allocator_test.go:5373 s6 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:32.698662 642 storage/allocator_test.go:5373 s16 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.699477 642 storage/allocator_test.go:5373 s13 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:32.701172 642 storage/allocator_test.go:5373 s18 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:32.703670 642 storage/allocator_test.go:5373 s18 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.705291 642 storage/allocator_test.go:5373 s13 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:32.708204 642 storage/allocator_test.go:5373 s6 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:32.709427 642 storage/allocator_test.go:5373 s8 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.711004 642 storage/allocator_test.go:5373 s11 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:32.713554 642 storage/allocator_test.go:5373 s18 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:32.716083 642 storage/allocator_test.go:5373 s6 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.717804 642 storage/allocator_test.go:5373 s2 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:32.721178 642 storage/allocator_test.go:5373 s2 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:32.723312 642 storage/allocator_test.go:5373 s0 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.724539 642 storage/allocator_test.go:5373 s18 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:32.727433 642 storage/allocator_test.go:5373 s1 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:32.731844 642 storage/allocator_test.go:5373 s1 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.733537 642 storage/allocator_test.go:5373 s17 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:32.737026 642 storage/allocator_test.go:5373 s17 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:32.737636 642 storage/allocator_test.go:5373 s17 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.738215 642 storage/allocator_test.go:5373 s11 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:32.738981 642 storage/allocator_test.go:5373 s16 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:32.739794 642 storage/allocator_test.go:5373 s19 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:32.740976 642 storage/allocator_test.go:5373 s0 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:32.742985 642 storage/allocator_test.go:5373 s0 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.743932 642 storage/allocator_test.go:5373 s1 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.746038 642 storage/allocator_test.go:5373 s10 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:32.747368 642 storage/allocator_test.go:5373 s1 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:32.748177 642 storage/allocator_test.go:5373 s10 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.748878 642 storage/allocator_test.go:5373 s11 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:32.749497 642 storage/allocator_test.go:5373 s0 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:32.750193 642 storage/allocator_test.go:5373 s19 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:32.751302 642 storage/allocator_test.go:5373 s14 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:32.754437 642 storage/allocator_test.go:5373 s11 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.755673 642 storage/allocator_test.go:5373 s17 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.758254 642 storage/allocator_test.go:5373 s0 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:32.761390 642 storage/allocator_test.go:5373 s9 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:32.762729 642 storage/allocator_test.go:5373 s19 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.764077 642 storage/allocator_test.go:5373 s10 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:32.765335 642 storage/allocator_test.go:5373 s15 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:32.766695 642 storage/allocator_test.go:5373 s19 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:32.768763 642 storage/allocator_test.go:5373 s6 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:32.775169 642 storage/allocator_test.go:5373 s6 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:32.776454 642 storage/allocator_test.go:5373 s2 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:32.780688 642 storage/allocator_test.go:5373 s13 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:32.782699 642 storage/allocator_test.go:5373 s8 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:32.783629 642 storage/allocator_test.go:5373 s13 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:32.784848 642 storage/allocator_test.go:5373 s8 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:32.787767 642 storage/allocator_test.go:5373 s12 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:32.788557 642 storage/allocator_test.go:5373 s5 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:32.789386 642 storage/allocator_test.go:5373 s4 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:32.791094 642 storage/allocator_test.go:5373 s13 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:32.791947 642 storage/allocator_test.go:5373 s2 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:32.792832 642 storage/allocator_test.go:5373 s4 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:32.793644 642 storage/allocator_test.go:5373 s12 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:32.794916 642 storage/allocator_test.go:5373 s13 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:32.800156 642 storage/allocator_test.go:5373 s18 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:32.808911 642 storage/allocator_test.go:5373 s14 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:32.817355 642 storage/allocator_test.go:5373 s12 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:33.218487 642 storage/allocator_test.go:5373 s0 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:33.219098 642 storage/allocator_test.go:5373 s9 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:33.220337 642 storage/allocator_test.go:5373 s4 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:33.221032 642 storage/allocator_test.go:5373 s4 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:33.221510 642 storage/allocator_test.go:5373 s0 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:33.222686 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:33.223236 642 storage/allocator_test.go:5373 s4 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:33.224027 642 storage/allocator_test.go:5373 s0 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:33.225227 642 storage/allocator_test.go:5373 s9 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:33.225847 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:33.226655 642 storage/allocator_test.go:5373 s13 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:33.227535 642 storage/allocator_test.go:5373 s4 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:33.228350 642 storage/allocator_test.go:5373 s14 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:33.229554 642 storage/allocator_test.go:5373 s0 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:33.230371 642 storage/allocator_test.go:5373 s13 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:33.231999 642 storage/allocator_test.go:5373 s4 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:33.232806 642 storage/allocator_test.go:5373 s3 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:33.233626 642 storage/allocator_test.go:5373 s0 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:33.235227 642 storage/allocator_test.go:5373 s4 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:33.236050 642 storage/allocator_test.go:5373 s8 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:33.236863 642 storage/allocator_test.go:5373 s4 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:33.238455 642 storage/allocator_test.go:5373 s14 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:33.239289 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:33.240091 642 storage/allocator_test.go:5373 s8 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:33.240923 642 storage/allocator_test.go:5373 s14 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:33.241751 642 storage/allocator_test.go:5373 s0 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:33.243624 642 storage/allocator_test.go:5373 s4 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:33.247831 642 storage/allocator_test.go:5373 s1 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:33.248975 642 storage/allocator_test.go:5373 s1 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:33.252795 642 storage/allocator_test.go:5373 s2 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:33.256295 642 storage/allocator_test.go:5373 s11 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:33.260013 642 storage/allocator_test.go:5373 s5 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:33.261173 642 storage/allocator_test.go:5373 s7 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:33.262179 642 storage/allocator_test.go:5373 s12 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:33.263404 642 storage/allocator_test.go:5373 s1 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:33.264042 642 storage/allocator_test.go:5373 s19 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:33.265516 642 storage/allocator_test.go:5373 s17 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:33.266150 642 storage/allocator_test.go:5373 s7 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:33.267659 642 storage/allocator_test.go:5373 s5 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:33.268245 642 storage/allocator_test.go:5373 s12 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:33.270073 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:33.271120 642 storage/allocator_test.go:5373 s10 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:33.271707 642 storage/allocator_test.go:5373 s1 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:33.272990 642 storage/allocator_test.go:5373 s19 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:33.273505 642 storage/allocator_test.go:5373 s11 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:33.274802 642 storage/allocator_test.go:5373 s15 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:33.275420 642 storage/allocator_test.go:5373 s15 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:33.277430 642 storage/allocator_test.go:5373 s19 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:33.279724 642 storage/allocator_test.go:5373 s7 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:33.280624 642 storage/allocator_test.go:5373 s5 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:33.284859 642 storage/allocator_test.go:5373 s18 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:33.288836 642 storage/allocator_test.go:5373 s16 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:33.290067 642 storage/allocator_test.go:5373 s6 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:33.294958 642 storage/allocator_test.go:5373 s9 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:33.295961 642 storage/allocator_test.go:5373 s18 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:33.296809 642 storage/allocator_test.go:5373 s14 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:33.298337 642 storage/allocator_test.go:5373 s9 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:33.300083 642 storage/allocator_test.go:5373 s6 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:33.301816 642 storage/allocator_test.go:5373 s3 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:33.304082 642 storage/allocator_test.go:5373 s13 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:33.305025 642 storage/allocator_test.go:5373 s6 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:33.306331 642 storage/allocator_test.go:5373 s9 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:33.307923 642 storage/allocator_test.go:5373 s13 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:33.308467 642 storage/allocator_test.go:5373 s18 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:33.309116 642 storage/allocator_test.go:5373 s14 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:33.311318 642 storage/allocator_test.go:5373 s17 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:33.312538 642 storage/allocator_test.go:5373 s3 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:33.313380 642 storage/allocator_test.go:5373 s6 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:33.314621 642 storage/allocator_test.go:5373 s3 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:33.316726 642 storage/allocator_test.go:5373 s8 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:33.317506 642 storage/allocator_test.go:5373 s13 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:33.319184 642 storage/allocator_test.go:5373 s3 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:33.320680 642 storage/allocator_test.go:5373 s14 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:33.321304 642 storage/allocator_test.go:5373 s14 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:33.322398 642 storage/allocator_test.go:5373 s3 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:33.323527 642 storage/allocator_test.go:5373 s13 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:33.325082 642 storage/allocator_test.go:5373 s18 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:33.326025 642 storage/allocator_test.go:5373 s2 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:33.327700 642 storage/allocator_test.go:5373 s1 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:33.330511 642 storage/allocator_test.go:5373 s2 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:33.331462 642 storage/allocator_test.go:5373 s18 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:33.339972 642 storage/allocator_test.go:5373 s5 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:33.341205 642 storage/allocator_test.go:5373 s11 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:33.342498 642 storage/allocator_test.go:5373 s2 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:33.343128 642 storage/allocator_test.go:5373 s11 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:33.343984 642 storage/allocator_test.go:5373 s7 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:33.345507 642 storage/allocator_test.go:5373 s11 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:33.350450 642 storage/allocator_test.go:5373 s15 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:33.352695 642 storage/allocator_test.go:5373 s2 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:33.353623 642 storage/allocator_test.go:5373 s2 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:33.355253 642 storage/allocator_test.go:5373 s2 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:33.356073 642 storage/allocator_test.go:5373 s2 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:33.364270 642 storage/allocator_test.go:5373 s5 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:33.373550 642 storage/allocator_test.go:5373 s15 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:33.380348 642 storage/allocator_test.go:5373 s1 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:33.394269 642 storage/allocator_test.go:5373 s19 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:33.402886 642 storage/allocator_test.go:5373 s1 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:33.412753 642 storage/allocator_test.go:5373 s9 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:33.415367 642 storage/allocator_test.go:5373 s9 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:33.416845 642 storage/allocator_test.go:5373 s19 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:33.579186 642 storage/allocator_test.go:5373 s12 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:33.579767 642 storage/allocator_test.go:5373 s0 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:33.580841 642 storage/allocator_test.go:5373 s15 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:33.581448 642 storage/allocator_test.go:5373 s12 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:33.582262 642 storage/allocator_test.go:5373 s15 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:33.582885 642 storage/allocator_test.go:5373 s10 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:33.583471 642 storage/allocator_test.go:5373 s15 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:33.584647 642 storage/allocator_test.go:5373 s15 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:33.585820 642 storage/allocator_test.go:5373 s17 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:33.586547 642 storage/allocator_test.go:5373 s0 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:33.588510 642 storage/allocator_test.go:5373 s19 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:33.590021 642 storage/allocator_test.go:5373 s0 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:33.590948 642 storage/allocator_test.go:5373 s0 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:33.592508 642 storage/allocator_test.go:5373 s0 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:33.593392 642 storage/allocator_test.go:5373 s16 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:33.594240 642 storage/allocator_test.go:5373 s19 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:33.595173 642 storage/allocator_test.go:5373 s15 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:33.596037 642 storage/allocator_test.go:5373 s12 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:33.597302 642 storage/allocator_test.go:5373 s17 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:33.598642 642 storage/allocator_test.go:5373 s17 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:33.599521 642 storage/allocator_test.go:5373 s15 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:33.601646 642 storage/allocator_test.go:5373 s12 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:33.603778 642 storage/allocator_test.go:5373 s15 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:33.605046 642 storage/allocator_test.go:5373 s15 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:33.609219 642 storage/allocator_test.go:5373 s1 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:33.610884 642 storage/allocator_test.go:5373 s17 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:33.615912 642 storage/allocator_test.go:5373 s5 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:33.623258 642 storage/allocator_test.go:5373 s14 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:33.631132 642 storage/allocator_test.go:5373 s2 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:33.633673 642 storage/allocator_test.go:5373 s13 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:33.639257 642 storage/allocator_test.go:5373 s6 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:33.641628 642 storage/allocator_test.go:5373 s3 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:33.653123 642 storage/allocator_test.go:5373 s3 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:33.662964 642 storage/allocator_test.go:5373 s1 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:33.670045 642 storage/allocator_test.go:5373 s9 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:33.682759 642 storage/allocator_test.go:5373 s7 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:34.352318 642 storage/allocator_test.go:5373 s18 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:34.353704 642 storage/allocator_test.go:5373 s6 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:34.354494 642 storage/allocator_test.go:5373 s18 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:34.356648 642 storage/allocator_test.go:5373 s6 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:34.357230 642 storage/allocator_test.go:5373 s18 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:34.357925 642 storage/allocator_test.go:5373 s6 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:34.358641 642 storage/allocator_test.go:5373 s18 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:34.359490 642 storage/allocator_test.go:5373 s8 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:34.360306 642 storage/allocator_test.go:5373 s19 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:34.361423 642 storage/allocator_test.go:5373 s18 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:34.362013 642 storage/allocator_test.go:5373 s6 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:34.363765 642 storage/allocator_test.go:5373 s6 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:34.364541 642 storage/allocator_test.go:5373 s15 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:34.365361 642 storage/allocator_test.go:5373 s19 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:34.369407 642 storage/allocator_test.go:5373 s15 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:34.370167 642 storage/allocator_test.go:5373 s19 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:34.370955 642 storage/allocator_test.go:5373 s1 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:34.371617 642 storage/allocator_test.go:5373 s15 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:34.372309 642 storage/allocator_test.go:5373 s6 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:34.372920 642 storage/allocator_test.go:5373 s18 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:34.373952 642 storage/allocator_test.go:5373 s6 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:34.374724 642 storage/allocator_test.go:5373 s6 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:34.377484 642 storage/allocator_test.go:5373 s18 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:34.378685 642 storage/allocator_test.go:5373 s8 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:34.379798 642 storage/allocator_test.go:5373 s18 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:34.386736 642 storage/allocator_test.go:5373 s9 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:34.394101 642 storage/allocator_test.go:5373 s2 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:34.402081 642 storage/allocator_test.go:5373 s2 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:34.410855 642 storage/allocator_test.go:5373 s7 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:34.416721 642 storage/allocator_test.go:5373 s2 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:34.422433 642 storage/allocator_test.go:5373 s5 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:34.430761 642 storage/allocator_test.go:5373 s13 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:34.582297 642 storage/allocator_test.go:5373 s0 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:34.583163 642 storage/allocator_test.go:5373 s11 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:34.584036 642 storage/allocator_test.go:5373 s16 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:34.584913 642 storage/allocator_test.go:5373 s11 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:34.586256 642 storage/allocator_test.go:5373 s11 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:34.587153 642 storage/allocator_test.go:5373 s11 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:34.588012 642 storage/allocator_test.go:5373 s11 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:34.590693 642 storage/allocator_test.go:5373 s12 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:34.591405 642 storage/allocator_test.go:5373 s11 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:34.592612 642 storage/allocator_test.go:5373 s0 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:34.593189 642 storage/allocator_test.go:5373 s0 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:34.593772 642 storage/allocator_test.go:5373 s11 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:34.594744 642 storage/allocator_test.go:5373 s0 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:34.595375 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:34.595974 642 storage/allocator_test.go:5373 s9 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:34.596874 642 storage/allocator_test.go:5373 s11 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:34.597996 642 storage/allocator_test.go:5373 s11 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:34.598691 642 storage/allocator_test.go:5373 s0 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:34.599481 642 storage/allocator_test.go:5373 s15 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:34.601250 642 storage/allocator_test.go:5373 s9 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:34.601994 642 storage/allocator_test.go:5373 s11 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:34.603295 642 storage/allocator_test.go:5373 s16 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:34.603914 642 storage/allocator_test.go:5373 s10 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:34.604462 642 storage/allocator_test.go:5373 s11 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:34.606257 642 storage/allocator_test.go:5373 s12 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:34.607638 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:34.608812 642 storage/allocator_test.go:5373 s16 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:34.609740 642 storage/allocator_test.go:5373 s2 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:34.611310 642 storage/allocator_test.go:5373 s2 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:34.612223 642 storage/allocator_test.go:5373 s4 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:34.627483 642 storage/allocator_test.go:5373 s14 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:34.630222 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:34.635443 642 storage/allocator_test.go:5373 s7 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:34.637253 642 storage/allocator_test.go:5373 s6 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:34.644811 642 storage/allocator_test.go:5373 s14 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:34.648884 642 storage/allocator_test.go:5373 s8 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:34.654439 642 storage/allocator_test.go:5373 s3 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:34.663114 642 storage/allocator_test.go:5373 s13 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:34.676213 642 storage/allocator_test.go:5373 s1 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:34.809866 642 storage/allocator_test.go:5373 s15 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:34.811204 642 storage/allocator_test.go:5373 s2 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:34.819465 642 storage/allocator_test.go:5373 s3 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:34.820924 642 storage/allocator_test.go:5373 s4 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:34.834066 642 storage/allocator_test.go:5373 s5 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:35.024806 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:35.028558 642 storage/allocator_test.go:5373 s8 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:35.031634 642 storage/allocator_test.go:5373 s18 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:35.032448 642 storage/allocator_test.go:5373 s7 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:35.033275 642 storage/allocator_test.go:5373 s7 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:35.034508 642 storage/allocator_test.go:5373 s13 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:35.035361 642 storage/allocator_test.go:5373 s7 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:35.036176 642 storage/allocator_test.go:5373 s8 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:35.037827 642 storage/allocator_test.go:5373 s8 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:35.038650 642 storage/allocator_test.go:5373 s18 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:35.042786 642 storage/allocator_test.go:5373 s8 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:35.043648 642 storage/allocator_test.go:5373 s7 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:35.044435 642 storage/allocator_test.go:5373 s7 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:35.045262 642 storage/allocator_test.go:5373 s8 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:35.046478 642 storage/allocator_test.go:5373 s7 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:35.047273 642 storage/allocator_test.go:5373 s7 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:35.048094 642 storage/allocator_test.go:5373 s17 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:35.052570 642 storage/allocator_test.go:5373 s13 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:35.053372 642 storage/allocator_test.go:5373 s8 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:35.060330 642 storage/allocator_test.go:5373 s1 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:35.072142 642 storage/allocator_test.go:5373 s0 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:35.079469 642 storage/allocator_test.go:5373 s3 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:35.088962 642 storage/allocator_test.go:5373 s14 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:35.457445 642 storage/allocator_test.go:5373 s11 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:35.458325 642 storage/allocator_test.go:5373 s6 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:35.459272 642 storage/allocator_test.go:5373 s19 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:35.460523 642 storage/allocator_test.go:5373 s6 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:35.461404 642 storage/allocator_test.go:5373 s19 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:35.462697 642 storage/allocator_test.go:5373 s10 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:35.463500 642 storage/allocator_test.go:5373 s11 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:35.467318 642 storage/allocator_test.go:5373 s11 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:35.469470 642 storage/allocator_test.go:5373 s16 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:35.470314 642 storage/allocator_test.go:5373 s16 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:35.470975 642 storage/allocator_test.go:5373 s10 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:35.472088 642 storage/allocator_test.go:5373 s11 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:35.472735 642 storage/allocator_test.go:5373 s11 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:35.473534 642 storage/allocator_test.go:5373 s19 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:35.474196 642 storage/allocator_test.go:5373 s16 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:35.474887 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:35.475883 642 storage/allocator_test.go:5373 s16 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:35.476478 642 storage/allocator_test.go:5373 s19 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:35.477373 642 storage/allocator_test.go:5373 s6 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:35.478136 642 storage/allocator_test.go:5373 s16 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:35.478816 642 storage/allocator_test.go:5373 s16 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:35.480745 642 storage/allocator_test.go:5373 s6 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:35.481402 642 storage/allocator_test.go:5373 s11 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:35.482052 642 storage/allocator_test.go:5373 s6 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:35.482941 642 storage/allocator_test.go:5373 s16 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:35.483563 642 storage/allocator_test.go:5373 s6 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:35.484708 642 storage/allocator_test.go:5373 s16 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:35.485679 642 storage/allocator_test.go:5373 s6 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:35.486520 642 storage/allocator_test.go:5373 s0 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:35.489718 642 storage/allocator_test.go:5373 s16 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:35.490915 642 storage/allocator_test.go:5373 s16 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:35.494893 642 storage/allocator_test.go:5373 s5 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:35.495906 642 storage/allocator_test.go:5373 s16 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:35.496840 642 storage/allocator_test.go:5373 s1 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:35.498410 642 storage/allocator_test.go:5373 s9 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:35.503728 642 storage/allocator_test.go:5373 s0 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:35.507664 642 storage/allocator_test.go:5373 s9 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:35.510181 642 storage/allocator_test.go:5373 s15 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:35.514778 642 storage/allocator_test.go:5373 s17 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:35.517200 642 storage/allocator_test.go:5373 s18 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:35.519673 642 storage/allocator_test.go:5373 s9 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:35.526490 642 storage/allocator_test.go:5373 s2 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:35.530900 642 storage/allocator_test.go:5373 s3 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:35.535387 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:35.547451 642 storage/allocator_test.go:5373 s1 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:35.557620 642 storage/allocator_test.go:5373 s1 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:35.570856 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:35.686776 642 storage/allocator_test.go:5373 s8 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:35.688151 642 storage/allocator_test.go:5373 s7 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:35.689325 642 storage/allocator_test.go:5373 s13 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:35.690144 642 storage/allocator_test.go:5373 s13 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:35.691566 642 storage/allocator_test.go:5373 s15 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:35.692478 642 storage/allocator_test.go:5373 s8 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:35.693178 642 storage/allocator_test.go:5373 s17 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:35.693811 642 storage/allocator_test.go:5373 s13 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:35.695163 642 storage/allocator_test.go:5373 s1 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:35.695998 642 storage/allocator_test.go:5373 s14 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:35.696517 642 storage/allocator_test.go:5373 s4 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:35.697223 642 storage/allocator_test.go:5373 s17 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:35.698440 642 storage/allocator_test.go:5373 s8 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:35.699684 642 storage/allocator_test.go:5373 s14 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:35.700367 642 storage/allocator_test.go:5373 s8 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:35.701885 642 storage/allocator_test.go:5373 s8 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:35.702541 642 storage/allocator_test.go:5373 s13 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:35.703258 642 storage/allocator_test.go:5373 s14 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:35.703938 642 storage/allocator_test.go:5373 s4 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:35.705256 642 storage/allocator_test.go:5373 s17 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:35.706116 642 storage/allocator_test.go:5373 s14 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:35.706672 642 storage/allocator_test.go:5373 s8 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:35.707368 642 storage/allocator_test.go:5373 s13 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:35.710471 642 storage/allocator_test.go:5373 s0 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:35.718100 642 storage/allocator_test.go:5373 s14 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:35.719752 642 storage/allocator_test.go:5373 s17 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:35.806776 642 storage/allocator_test.go:5373 s10 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:35.807644 642 storage/allocator_test.go:5373 s11 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:35.809103 642 storage/allocator_test.go:5373 s12 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:35.810304 642 storage/allocator_test.go:5373 s11 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:35.811542 642 storage/allocator_test.go:5373 s10 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:35.812374 642 storage/allocator_test.go:5373 s7 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:35.815186 642 storage/allocator_test.go:5373 s3 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:35.816357 642 storage/allocator_test.go:5373 s7 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:35.817448 642 storage/allocator_test.go:5373 s10 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:35.818097 642 storage/allocator_test.go:5373 s10 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:35.818872 642 storage/allocator_test.go:5373 s10 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:35.820249 642 storage/allocator_test.go:5373 s18 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:35.820948 642 storage/allocator_test.go:5373 s10 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:35.822162 642 storage/allocator_test.go:5373 s3 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:35.823257 642 storage/allocator_test.go:5373 s10 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:35.824064 642 storage/allocator_test.go:5373 s10 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:35.824619 642 storage/allocator_test.go:5373 s10 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:35.826085 642 storage/allocator_test.go:5373 s11 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:35.826701 642 storage/allocator_test.go:5373 s12 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:35.827252 642 storage/allocator_test.go:5373 s11 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:35.828035 642 storage/allocator_test.go:5373 s3 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:35.828654 642 storage/allocator_test.go:5373 s11 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:35.829694 642 storage/allocator_test.go:5373 s11 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:35.830759 642 storage/allocator_test.go:5373 s11 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:35.835769 642 storage/allocator_test.go:5373 s1 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:35.848510 642 storage/allocator_test.go:5373 s14 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:35.849409 642 storage/allocator_test.go:5373 s2 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:35.859223 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:35.859943 642 storage/allocator_test.go:5373 s9 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:35.869886 642 storage/allocator_test.go:5373 s17 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:35.870823 642 storage/allocator_test.go:5373 s18 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:36.242804 642 storage/allocator_test.go:5373 s14 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.247534 642 storage/allocator_test.go:5373 s14 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.252166 642 storage/allocator_test.go:5373 s0 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.256952 642 storage/allocator_test.go:5373 s3 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.262638 642 storage/allocator_test.go:5373 s0 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.270836 642 storage/allocator_test.go:5373 s14 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.276409 642 storage/allocator_test.go:5373 s0 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.282156 642 storage/allocator_test.go:5373 s1 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.286751 642 storage/allocator_test.go:5373 s1 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.291243 642 storage/allocator_test.go:5373 s1 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.296145 642 storage/allocator_test.go:5373 s1 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.302767 642 storage/allocator_test.go:5373 s16 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.308535 642 storage/allocator_test.go:5373 s6 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.312144 642 storage/allocator_test.go:5373 s9 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.317058 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.323171 642 storage/allocator_test.go:5373 s14 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.327218 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.334024 642 storage/allocator_test.go:5373 s3 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.342564 642 storage/allocator_test.go:5373 s15 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.346030 642 storage/allocator_test.go:5373 s19 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.355383 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:36.356275 642 storage/allocator_test.go:5373 s12 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:36.357455 642 storage/allocator_test.go:5373 s2 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:36.358212 642 storage/allocator_test.go:5373 s2 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:36.359450 642 storage/allocator_test.go:5373 s2 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:36.360248 642 storage/allocator_test.go:5373 s12 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.361037 642 storage/allocator_test.go:5373 s2 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:36.361619 642 storage/allocator_test.go:5373 s12 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:36.362086 642 storage/allocator_test.go:5373 s2 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.362541 642 storage/allocator_test.go:5373 s12 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:36.364433 642 storage/allocator_test.go:5373 s2 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:36.365741 642 storage/allocator_test.go:5373 s2 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:36.366990 642 storage/allocator_test.go:5373 s12 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:36.367804 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.368392 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.368903 642 storage/allocator_test.go:5373 s12 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:36.369370 642 storage/allocator_test.go:5373 s12 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:36.369879 642 storage/allocator_test.go:5373 s12 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:36.370506 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.371391 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.372002 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.372870 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.373438 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.374086 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.374771 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.375410 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.376084 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.377346 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.378281 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.379149 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.380267 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.381237 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.382235 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.383244 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.384206 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.385723 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.387857 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.390296 642 storage/allocator_test.go:5373 s0 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:36.391380 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.392549 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.393624 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.394566 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.395562 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.397027 642 storage/allocator_test.go:5373 s0 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:36.398935 642 storage/allocator_test.go:5373 s5 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:36.402367 642 storage/allocator_test.go:5373 s13 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:36.403198 642 storage/allocator_test.go:5373 s13 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.407132 642 storage/allocator_test.go:5373 s13 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.408996 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.417102 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.418127 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.422643 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.424561 642 storage/allocator_test.go:5373 s5 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:36.427641 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.428994 642 storage/allocator_test.go:5373 s19 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.437043 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.440501 642 storage/allocator_test.go:5373 s4 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:36.444010 642 storage/allocator_test.go:5373 s10 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:36.444753 642 storage/allocator_test.go:5373 s4 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:36.445485 642 storage/allocator_test.go:5373 s6 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.448294 642 storage/allocator_test.go:5373 s6 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.448873 642 storage/allocator_test.go:5373 s9 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:36.449977 642 storage/allocator_test.go:5373 s9 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:36.451228 642 storage/allocator_test.go:5373 s9 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:36.451776 642 storage/allocator_test.go:5373 s15 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:36.452387 642 storage/allocator_test.go:5373 s4 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.455389 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.456214 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.457739 642 storage/allocator_test.go:5373 s10 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:36.460512 642 storage/allocator_test.go:5373 s15 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:36.467937 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.469062 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.475855 642 storage/allocator_test.go:5373 s15 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.476742 642 storage/allocator_test.go:5373 s10 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:36.478696 642 storage/allocator_test.go:5369 s15 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.481275 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:36.482018 642 storage/allocator_test.go:5373 s19 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:36.482616 642 storage/allocator_test.go:5373 s3 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.483639 642 storage/allocator_test.go:5373 s17 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:36.484432 642 storage/allocator_test.go:5373 s19 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:36.485337 642 storage/allocator_test.go:5373 s11 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:36.486213 642 storage/allocator_test.go:5373 s18 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.486719 642 storage/allocator_test.go:5373 s7 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:36.487554 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.488069 642 storage/allocator_test.go:5373 s11 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:36.495637 642 storage/allocator_test.go:5373 s16 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:36.496831 642 storage/allocator_test.go:5373 s16 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.498145 642 storage/allocator_test.go:5373 s2 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.498992 642 storage/allocator_test.go:5373 s2 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:36.503634 642 storage/allocator_test.go:5373 s2 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:36.505435 642 storage/allocator_test.go:5373 s0 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.506724 642 storage/allocator_test.go:5373 s13 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.507422 642 storage/allocator_test.go:5373 s12 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:36.513084 642 storage/allocator_test.go:5373 s13 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:36.518883 642 storage/allocator_test.go:5373 s16 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.521426 642 storage/allocator_test.go:5373 s1 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.522308 642 storage/allocator_test.go:5373 s13 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:36.532438 642 storage/allocator_test.go:5373 s1 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:36.534257 642 storage/allocator_test.go:5373 s16 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.535395 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.535980 642 storage/allocator_test.go:5373 s12 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:36.539095 642 storage/allocator_test.go:5373 s0 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:36.540400 642 storage/allocator_test.go:5373 s16 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.541568 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.542302 642 storage/allocator_test.go:5373 s12 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:36.547004 642 storage/allocator_test.go:5373 s12 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:36.550389 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.552748 642 storage/allocator_test.go:5373 s16 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.556076 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.563306 642 storage/allocator_test.go:5373 s8 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:36.564902 642 storage/allocator_test.go:5373 s6 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:36.566794 642 storage/allocator_test.go:5373 s18 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.569262 642 storage/allocator_test.go:5373 s17 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.570767 642 storage/allocator_test.go:5373 s2 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:36.571669 642 storage/allocator_test.go:5373 s14 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.573987 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:36.575695 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.577211 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.578515 642 storage/allocator_test.go:5373 s18 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.579697 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.580299 642 storage/allocator_test.go:5373 s14 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:36.580935 642 storage/allocator_test.go:5373 s8 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.582763 642 storage/allocator_test.go:5373 s17 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:36.584734 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.586891 642 storage/allocator_test.go:5373 s6 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:36.589412 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.594543 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.595751 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.596764 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.599446 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.613750 642 storage/allocator_test.go:5373 s3 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:36.614444 642 storage/allocator_test.go:5373 s3 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:36.615276 642 storage/allocator_test.go:5373 s3 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.616341 642 storage/allocator_test.go:5373 s11 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:36.617061 642 storage/allocator_test.go:5373 s3 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:36.617934 642 storage/allocator_test.go:5373 s3 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:36.619043 642 storage/allocator_test.go:5373 s3 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:36.620439 642 storage/allocator_test.go:5373 s7 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:36.622679 642 storage/allocator_test.go:5373 s11 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.623468 642 storage/allocator_test.go:5373 s11 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:36.624353 642 storage/allocator_test.go:5373 s3 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:36.625120 642 storage/allocator_test.go:5373 s3 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:36.626057 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.626906 642 storage/allocator_test.go:5373 s11 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:36.627929 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.628679 642 storage/allocator_test.go:5373 s7 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:36.629369 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:36.632234 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.633325 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.634510 642 storage/allocator_test.go:5373 s11 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:36.635557 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.636426 642 storage/allocator_test.go:5373 s7 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:36.637629 642 storage/allocator_test.go:5373 s11 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:36.638504 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.639931 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.640551 642 storage/allocator_test.go:5373 s11 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:36.641105 642 storage/allocator_test.go:5373 s7 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:36.641531 642 storage/allocator_test.go:5373 s7 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:36.642105 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.642733 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.643349 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.643932 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.644527 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.645487 642 storage/allocator_test.go:5373 s7 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:36.646661 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.648303 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.649222 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.650266 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.652326 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.653443 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.655950 642 storage/allocator_test.go:5373 s10 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.659705 642 storage/allocator_test.go:5373 s10 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:36.661865 642 storage/allocator_test.go:5373 s10 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:36.662861 642 storage/allocator_test.go:5373 s10 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:36.665023 642 storage/allocator_test.go:5373 s10 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.665655 642 storage/allocator_test.go:5373 s10 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:36.666737 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.667474 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.668360 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.669410 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.670497 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.671631 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.672291 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.673560 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.674267 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.675355 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.676977 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.678603 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.680498 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.682124 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.683745 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.684708 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.687532 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.688963 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.691853 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.693251 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.694810 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.697516 642 storage/allocator_test.go:5373 s12 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:36.699691 642 storage/allocator_test.go:5373 s16 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:36.701638 642 storage/allocator_test.go:5373 s16 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:36.745826 642 storage/allocator_test.go:5373 s13 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.747129 642 storage/allocator_test.go:5373 s15 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.749461 642 storage/allocator_test.go:5373 s1 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:36.753216 642 storage/allocator_test.go:5373 s14 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.754431 642 storage/allocator_test.go:5373 s14 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.756275 642 storage/allocator_test.go:5373 s13 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:36.764346 642 storage/allocator_test.go:5373 s0 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.766247 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.770727 642 storage/allocator_test.go:5373 s15 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:36.775547 642 storage/allocator_test.go:5373 s12 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.777254 642 storage/allocator_test.go:5373 s4 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.780121 642 storage/allocator_test.go:5373 s12 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:36.784992 642 storage/allocator_test.go:5373 s17 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.786628 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.789547 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.796816 642 storage/allocator_test.go:5373 s12 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.799473 642 storage/allocator_test.go:5373 s17 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.806880 642 storage/allocator_test.go:5373 s17 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:36.810285 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:36.813234 642 storage/allocator_test.go:5373 s5 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.814784 642 storage/allocator_test.go:5373 s9 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.815860 642 storage/allocator_test.go:5373 s8 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.817166 642 storage/allocator_test.go:5373 s6 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:36.817716 642 storage/allocator_test.go:5373 s4 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:36.818958 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:36.821531 642 storage/allocator_test.go:5373 s6 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.822904 642 storage/allocator_test.go:5373 s14 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.823886 642 storage/allocator_test.go:5373 s8 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.825116 642 storage/allocator_test.go:5373 s4 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:36.825723 642 storage/allocator_test.go:5373 s14 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:36.826774 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.831553 642 storage/allocator_test.go:5373 s1 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.833938 642 storage/allocator_test.go:5373 s19 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:36.835358 642 storage/allocator_test.go:5373 s1 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.838334 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.839468 642 storage/allocator_test.go:5373 s19 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:36.842155 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.843995 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.844911 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.845788 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.846691 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.847554 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.848797 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.850009 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.852270 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.853430 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.854670 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.859271 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.866263 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:36.867144 642 storage/allocator_test.go:5373 s10 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:36.867888 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:36.868560 642 storage/allocator_test.go:5373 s10 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:36.869376 642 storage/allocator_test.go:5373 s10 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:36.870230 642 storage/allocator_test.go:5373 s7 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:36.871049 642 storage/allocator_test.go:5373 s7 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:36.872254 642 storage/allocator_test.go:5373 s10 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:36.872982 642 storage/allocator_test.go:5373 s10 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:36.874029 642 storage/allocator_test.go:5373 s7 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:36.874747 642 storage/allocator_test.go:5373 s10 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:36.875451 642 storage/allocator_test.go:5373 s7 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:36.876304 642 storage/allocator_test.go:5373 s10 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:36.877022 642 storage/allocator_test.go:5373 s10 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:36.878082 642 storage/allocator_test.go:5373 s7 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:36.878841 642 storage/allocator_test.go:5373 s7 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:36.879679 642 storage/allocator_test.go:5373 s10 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:36.880367 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:36.881343 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.882218 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.883093 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.885503 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.886483 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.887489 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.888901 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.889781 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.891016 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.891899 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.892768 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.893796 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.894687 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.895922 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.896794 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.897825 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.898710 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.900230 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.901727 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.903102 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.904663 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.906085 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.907276 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.908848 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.909772 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.911194 642 storage/allocator_test.go:5373 s16 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:36.912457 642 storage/allocator_test.go:5373 s16 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:36.913515 642 storage/allocator_test.go:5373 s16 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:36.914407 642 storage/allocator_test.go:5373 s16 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:36.915944 642 storage/allocator_test.go:5373 s0 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:36.917537 642 storage/allocator_test.go:5373 s12 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:36.918811 642 storage/allocator_test.go:5373 s17 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:36.920012 642 storage/allocator_test.go:5373 s17 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:36.920870 642 storage/allocator_test.go:5373 s0 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:36.921686 642 storage/allocator_test.go:5373 s12 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:36.922469 642 storage/allocator_test.go:5373 s17 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:36.924472 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.926265 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.928306 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.930763 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.931890 642 storage/allocator_test.go:5373 s12 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:36.933306 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.934134 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.934896 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.935758 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.936387 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.937346 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.938553 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.940006 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.942769 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.944558 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.945450 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.946851 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.948256 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.949655 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.951755 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.955136 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.959059 642 storage/allocator_test.go:5373 s16 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:36.960229 642 storage/allocator_test.go:5373 s16 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:36.961013 642 storage/allocator_test.go:5373 s16 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:36.962520 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.963457 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.964396 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.965318 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.966233 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.967479 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.968753 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.969979 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.971504 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.972778 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.973674 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.974550 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.976168 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.977089 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.978018 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.978931 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.979864 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.981859 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.984714 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.985732 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.986835 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.987948 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.988749 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.990095 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.992475 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.993455 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.994541 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.995704 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.996758 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:36.998490 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.000219 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.001778 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.003965 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.020860 642 storage/allocator_test.go:5373 s1 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:37.023827 642 storage/allocator_test.go:5373 s1 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.033101 642 storage/allocator_test.go:5373 s11 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:37.035558 642 storage/allocator_test.go:5373 s1 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.043655 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:37.047649 642 storage/allocator_test.go:5373 s13 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.052852 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:37.055243 642 storage/allocator_test.go:5373 s19 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.060301 642 storage/allocator_test.go:5373 s15 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:37.062722 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.074647 642 storage/allocator_test.go:5373 s13 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:37.079738 642 storage/allocator_test.go:5373 s15 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.083251 642 storage/allocator_test.go:5373 s19 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.087701 642 storage/allocator_test.go:5373 s17 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:37.090496 642 storage/allocator_test.go:5373 s14 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.092564 642 storage/allocator_test.go:5373 s3 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.097148 642 storage/allocator_test.go:5373 s4 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:37.100105 642 storage/allocator_test.go:5373 s6 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.102165 642 storage/allocator_test.go:5373 s18 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.110270 642 storage/allocator_test.go:5373 s19 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:37.115268 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.119742 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.122915 642 storage/allocator_test.go:5373 s2 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:37.124648 642 storage/allocator_test.go:5373 s9 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.125716 642 storage/allocator_test.go:5373 s11 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:37.130661 642 storage/allocator_test.go:5373 s16 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:37.131316 642 storage/allocator_test.go:5373 s10 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.132083 642 storage/allocator_test.go:5373 s10 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:37.132869 642 storage/allocator_test.go:5373 s10 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:37.133653 642 storage/allocator_test.go:5373 s10 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:37.134398 642 storage/allocator_test.go:5373 s10 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:37.135009 642 storage/allocator_test.go:5373 s16 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:37.135545 642 storage/allocator_test.go:5373 s10 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:37.136100 642 storage/allocator_test.go:5373 s16 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:37.137152 642 storage/allocator_test.go:5373 s10 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:37.137783 642 storage/allocator_test.go:5373 s10 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:37.138296 642 storage/allocator_test.go:5373 s10 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:37.138873 642 storage/allocator_test.go:5373 s16 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:37.139499 642 storage/allocator_test.go:5373 s10 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:37.140671 642 storage/allocator_test.go:5373 s16 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:37.143722 642 storage/allocator_test.go:5373 s16 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:37.144710 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.146096 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.146910 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.147908 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.148882 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.149871 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.150704 642 storage/allocator_test.go:5373 s16 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:37.151717 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.152700 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.153688 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.154687 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.155369 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.156043 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.156761 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.157410 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.158322 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.159106 642 storage/allocator_test.go:5373 s16 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:37.159976 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.161521 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.162844 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.164136 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.170895 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.172945 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.174462 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.175949 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.177339 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.178771 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.180857 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.182125 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.183465 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.184815 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.186317 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.188628 642 storage/allocator_test.go:5373 s0 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:37.195004 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:37.197761 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.203944 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:37.207015 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.276556 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.282709 642 storage/allocator_test.go:5373 s6 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.290625 642 storage/allocator_test.go:5373 s6 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.311379 642 storage/allocator_test.go:5373 s18 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.320040 642 storage/allocator_test.go:5373 s18 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.328473 642 storage/allocator_test.go:5373 s18 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.343914 642 storage/allocator_test.go:5373 s17 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.354391 642 storage/allocator_test.go:5373 s17 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:37.355752 642 storage/allocator_test.go:5373 s17 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.363320 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.364770 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.376570 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.378530 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.386114 642 storage/allocator_test.go:5373 s12 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:37.387462 642 storage/allocator_test.go:5373 s11 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:37.394223 642 storage/allocator_test.go:5373 s0 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.395738 642 storage/allocator_test.go:5373 s0 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:37.397529 642 storage/allocator_test.go:5373 s0 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:37.398219 642 storage/allocator_test.go:5373 s0 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:37.399812 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.400265 642 storage/allocator_test.go:5373 s0 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:37.401096 642 storage/allocator_test.go:5373 s0 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:37.401926 642 storage/allocator_test.go:5373 s0 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:37.402677 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.404666 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.406935 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.407842 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.409819 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.410710 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.412008 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.426275 642 storage/allocator_test.go:5373 s1 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.437268 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.445800 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.461750 642 storage/allocator_test.go:5373 s8 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.463042 642 storage/allocator_test.go:5373 s18 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.471509 642 storage/allocator_test.go:5373 s15 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.472823 642 storage/allocator_test.go:5373 s13 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.481700 642 storage/allocator_test.go:5373 s4 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.483724 642 storage/allocator_test.go:5373 s18 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.496470 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.497521 642 storage/allocator_test.go:5373 s14 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.503354 642 storage/allocator_test.go:5373 s10 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:37.511268 642 storage/allocator_test.go:5373 s0 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:37.512096 642 storage/allocator_test.go:5373 s0 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:37.513733 642 storage/allocator_test.go:5373 s0 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:37.514890 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:37.521662 642 storage/allocator_test.go:5373 s0 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:37.522451 642 storage/allocator_test.go:5373 s0 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:37.523317 642 storage/allocator_test.go:5373 s0 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:37.525007 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.527100 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.529254 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.530240 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.532025 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.533374 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.535052 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.536056 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.536992 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.538770 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.555091 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.563538 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.573205 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.587333 642 storage/allocator_test.go:5373 s6 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.593305 642 storage/allocator_test.go:5373 s12 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.603194 642 storage/allocator_test.go:5373 s11 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.618196 642 storage/allocator_test.go:5373 s17 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.634022 642 storage/allocator_test.go:5373 s0 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.635450 642 storage/allocator_test.go:5373 s0 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:37.637717 642 storage/allocator_test.go:5373 s0 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:37.638426 642 storage/allocator_test.go:5373 s0 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:37.640860 642 storage/allocator_test.go:5373 s0 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:37.641666 642 storage/allocator_test.go:5373 s0 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:37.645281 642 storage/allocator_test.go:5373 s0 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:37.647940 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.648948 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.652809 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.654147 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.672196 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.678982 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.685684 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.704862 642 storage/allocator_test.go:5373 s8 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.705841 642 storage/allocator_test.go:5373 s14 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.711321 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.712153 642 storage/allocator_test.go:5373 s15 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.717951 642 storage/allocator_test.go:5373 s15 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.720260 642 storage/allocator_test.go:5369 s15 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.732445 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.734261 642 storage/allocator_test.go:5373 s10 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.738301 642 storage/allocator_test.go:5373 s4 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:37.746384 642 storage/allocator_test.go:5373 s2 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:37.747272 642 storage/allocator_test.go:5373 s0 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:37.748489 642 storage/allocator_test.go:5373 s0 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:37.749326 642 storage/allocator_test.go:5373 s2 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:37.750556 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:37.751869 642 storage/allocator_test.go:5373 s0 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:37.753048 642 storage/allocator_test.go:5373 s0 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:37.753869 642 storage/allocator_test.go:5373 s2 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:37.756769 642 storage/allocator_test.go:5373 s0 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:37.758526 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.759346 642 storage/allocator_test.go:5373 s2 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:37.760858 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.761789 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.763280 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.764683 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.765544 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.766457 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.789750 642 storage/allocator_test.go:5373 s12 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.791029 642 storage/allocator_test.go:5373 s1 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.798377 642 storage/allocator_test.go:5373 s1 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.799696 642 storage/allocator_test.go:5373 s6 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.807138 642 storage/allocator_test.go:5373 s12 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.808836 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.820481 642 storage/allocator_test.go:5373 s13 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.821204 642 storage/allocator_test.go:5373 s9 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:37.822442 642 storage/allocator_test.go:5373 s19 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.827125 642 storage/allocator_test.go:5373 s13 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:37.830521 642 storage/allocator_test.go:5373 s19 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.831387 642 storage/allocator_test.go:5373 s11 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:37.832405 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.836527 642 storage/allocator_test.go:5373 s3 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:37.840101 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.841465 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.842730 642 storage/allocator_test.go:5373 s6 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.850024 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.855470 642 storage/allocator_test.go:5373 s17 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.856295 642 storage/allocator_test.go:5373 s16 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:37.857091 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.860514 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.861192 642 storage/allocator_test.go:5373 s17 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:37.863139 642 storage/allocator_test.go:5373 s18 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:37.874120 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.874780 642 storage/allocator_test.go:5373 s1 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:37.882890 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.883698 642 storage/allocator_test.go:5373 s2 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:37.896686 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.897999 642 storage/allocator_test.go:5373 s2 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:37.906144 642 storage/allocator_test.go:5373 s1 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.907017 642 storage/allocator_test.go:5373 s12 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:37.915983 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.921187 642 storage/allocator_test.go:5373 s0 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:37.936143 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.937460 642 storage/allocator_test.go:5373 s2 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:37.946915 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.948036 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.956122 642 storage/allocator_test.go:5373 s5 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.957186 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:37.977925 642 storage/allocator_test.go:5373 s12 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.980168 642 storage/allocator_test.go:5373 s1 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:37.983278 642 storage/allocator_test.go:5373 s4 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.987217 642 storage/allocator_test.go:5373 s8 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:37.987926 642 storage/allocator_test.go:5373 s4 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:37.994527 642 storage/allocator_test.go:5373 s16 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:37.995083 642 storage/allocator_test.go:5373 s16 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:37.995648 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:37.996180 642 storage/allocator_test.go:5373 s16 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:37.996805 642 storage/allocator_test.go:5373 s16 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:37.997334 642 storage/allocator_test.go:5373 s16 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:37.997885 642 storage/allocator_test.go:5373 s16 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:37.998485 642 storage/allocator_test.go:5373 s16 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:37.999231 642 storage/allocator_test.go:5373 s16 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:37.999962 642 storage/allocator_test.go:5373 s16 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:38.000859 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.001741 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.002636 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.003390 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.004087 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.004907 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.006100 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.006690 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.007326 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.007994 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.008715 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.009436 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.010395 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.011220 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.012282 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.013000 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.013666 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.014367 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.015042 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.015968 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.016701 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.017437 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.018324 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.019116 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.020465 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.021413 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.022342 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.023514 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.028722 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.030960 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.032648 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.033890 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.035212 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.036272 642 storage/allocator_test.go:5373 s17 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:38.037569 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.038894 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.040017 642 storage/allocator_test.go:5373 s17 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:38.041078 642 storage/allocator_test.go:5373 s17 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:38.041963 642 storage/allocator_test.go:5373 s17 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.042994 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.044043 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.045027 642 storage/allocator_test.go:5373 s17 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:38.047248 642 storage/allocator_test.go:5373 s7 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:38.050828 642 storage/allocator_test.go:5373 s7 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:38.055794 642 storage/allocator_test.go:5373 s15 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.058686 642 storage/allocator_test.go:5373 s10 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:38.062008 642 storage/allocator_test.go:5373 s15 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.066298 642 storage/allocator_test.go:5373 s17 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:38.085430 642 storage/allocator_test.go:5373 s17 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.090448 642 storage/allocator_test.go:5373 s5 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:38.092065 642 storage/allocator_test.go:5373 s5 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:38.095691 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.096687 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.100211 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.101939 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.105469 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.106420 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.111658 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.114327 642 storage/allocator_test.go:5373 s14 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:38.120314 642 storage/allocator_test.go:5373 s14 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:38.145408 642 storage/allocator_test.go:5373 s4 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.151834 642 storage/allocator_test.go:5373 s4 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.167713 642 storage/allocator_test.go:5373 s4 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.178665 642 storage/allocator_test.go:5373 s7 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.196702 642 storage/allocator_test.go:5373 s4 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.210854 642 storage/allocator_test.go:5373 s16 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.221376 642 storage/allocator_test.go:5373 s7 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.228967 642 storage/allocator_test.go:5373 s7 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.242802 642 storage/allocator_test.go:5373 s10 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.253418 642 storage/allocator_test.go:5373 s10 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.261298 642 storage/allocator_test.go:5373 s14 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:38.261988 642 storage/allocator_test.go:5373 s14 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:38.262643 642 storage/allocator_test.go:5373 s14 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:38.263262 642 storage/allocator_test.go:5373 s14 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:38.263806 642 storage/allocator_test.go:5373 s14 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:38.264274 642 storage/allocator_test.go:5373 s14 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:38.264840 642 storage/allocator_test.go:5373 s14 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:38.265291 642 storage/allocator_test.go:5373 s14 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:38.265813 642 storage/allocator_test.go:5373 s14 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:38.266384 642 storage/allocator_test.go:5373 s14 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:38.267136 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.267787 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.268829 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.269947 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.271212 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.272119 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.272950 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.273726 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.274375 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.275125 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.275807 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.276670 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.277314 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.277987 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.278750 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.279346 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.279953 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.280736 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.281553 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.282562 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.283320 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.284173 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.284859 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.286564 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.287839 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.288920 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.289780 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.290731 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.291734 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.293168 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.294507 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.295876 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.296977 642 storage/allocator_test.go:5373 s1 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:38.297990 642 storage/allocator_test.go:5373 s3 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:38.299049 642 storage/allocator_test.go:5373 s0 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:38.300385 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.301758 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.303054 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.304801 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.306315 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.307826 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.309180 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.311387 642 storage/allocator_test.go:5373 s6 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:38.320009 642 storage/allocator_test.go:5373 s2 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:38.322020 642 storage/allocator_test.go:5373 s6 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.327877 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.329367 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.341069 642 storage/allocator_test.go:5373 s15 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:38.346672 642 storage/allocator_test.go:5373 s15 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.353553 642 storage/allocator_test.go:5373 s8 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:38.355636 642 storage/allocator_test.go:5373 s8 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:38.357530 642 storage/allocator_test.go:5373 s9 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.358381 642 storage/allocator_test.go:5373 s9 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:38.363095 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.365204 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.367368 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.368430 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.375497 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.379250 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.382773 642 storage/allocator_test.go:5373 s11 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.384003 642 storage/allocator_test.go:5373 s12 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:38.394058 642 storage/allocator_test.go:5373 s18 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:38.395540 642 storage/allocator_test.go:5373 s18 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:38.396348 642 storage/allocator_test.go:5373 s12 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:38.397315 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.398771 642 storage/allocator_test.go:5373 s16 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.399406 642 storage/allocator_test.go:5373 s16 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:38.409902 642 storage/allocator_test.go:5373 s4 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.411351 642 storage/allocator_test.go:5373 s6 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.412252 642 storage/allocator_test.go:5373 s15 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:38.416497 642 storage/allocator_test.go:5373 s14 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.417873 642 storage/allocator_test.go:5373 s14 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.418726 642 storage/allocator_test.go:5373 s15 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:38.427301 642 storage/allocator_test.go:5373 s14 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.433913 642 storage/allocator_test.go:5373 s15 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.435708 642 storage/allocator_test.go:5373 s6 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:38.441048 642 storage/allocator_test.go:5373 s10 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.442405 642 storage/allocator_test.go:5373 s5 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.443291 642 storage/allocator_test.go:5373 s5 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:38.447872 642 storage/allocator_test.go:5373 s6 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.449824 642 storage/allocator_test.go:5373 s5 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.451050 642 storage/allocator_test.go:5373 s6 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:38.458428 642 storage/allocator_test.go:5373 s10 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.461567 642 storage/allocator_test.go:5373 s9 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.463226 642 storage/allocator_test.go:5373 s2 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:38.465818 642 storage/allocator_test.go:5373 s10 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:38.467541 642 storage/allocator_test.go:5373 s1 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:38.470835 642 storage/allocator_test.go:5373 s1 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.473998 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.474930 642 storage/allocator_test.go:5373 s4 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:38.476893 642 storage/allocator_test.go:5373 s4 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:38.477564 642 storage/allocator_test.go:5373 s9 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:38.479545 642 storage/allocator_test.go:5373 s7 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.481122 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.482102 642 storage/allocator_test.go:5373 s3 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:38.486062 642 storage/allocator_test.go:5373 s8 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:38.487003 642 storage/allocator_test.go:5373 s8 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:38.490898 642 storage/allocator_test.go:5373 s0 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.493534 642 storage/allocator_test.go:5373 s14 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.495174 642 storage/allocator_test.go:5373 s14 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:38.496928 642 storage/allocator_test.go:5373 s0 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:38.497536 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.498436 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.499171 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.499876 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.500786 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.501453 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.502190 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.503148 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.504407 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.505316 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.511878 642 storage/allocator_test.go:5373 s12 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:38.512693 642 storage/allocator_test.go:5373 s18 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:38.515168 642 storage/allocator_test.go:5373 s12 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:38.520174 642 storage/allocator_test.go:5373 s18 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:38.522791 642 storage/allocator_test.go:5373 s18 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:38.523528 642 storage/allocator_test.go:5373 s18 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:38.524311 642 storage/allocator_test.go:5373 s18 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:38.525075 642 storage/allocator_test.go:5373 s18 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:38.525945 642 storage/allocator_test.go:5373 s12 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:38.526794 642 storage/allocator_test.go:5373 s12 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:38.527669 642 storage/allocator_test.go:5373 s12 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:38.528371 642 storage/allocator_test.go:5373 s18 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:38.529527 642 storage/allocator_test.go:5373 s12 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:38.530391 642 storage/allocator_test.go:5373 s18 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.531171 642 storage/allocator_test.go:5373 s12 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:38.532255 642 storage/allocator_test.go:5373 s18 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:38.533524 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.534534 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.535467 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.536500 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.537422 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.538428 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.539236 642 storage/allocator_test.go:5373 s12 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:38.540153 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.541078 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.542088 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.542953 642 storage/allocator_test.go:5373 s12 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:38.544013 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.544923 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.546195 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.547259 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.548191 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.549449 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.550740 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.551778 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.553206 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.554719 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.556051 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.557429 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.558741 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.559764 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.560545 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.561697 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.563045 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.564428 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.565717 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.567516 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.568823 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.569961 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.571956 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.573724 642 storage/allocator_test.go:5373 s5 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:38.574705 642 storage/allocator_test.go:5373 s16 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:38.575305 642 storage/allocator_test.go:5373 s16 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:38.575958 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:38.576504 642 storage/allocator_test.go:5373 s5 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:38.577144 642 storage/allocator_test.go:5373 s16 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:38.579391 642 storage/allocator_test.go:5373 s16 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:38.580669 642 storage/allocator_test.go:5373 s16 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:38.581899 642 storage/allocator_test.go:5373 s5 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:38.583302 642 storage/allocator_test.go:5373 s16 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:38.584035 642 storage/allocator_test.go:5373 s16 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:38.585045 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.585916 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.586883 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.587792 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.589008 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.589995 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.590868 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.591873 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.592725 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.593665 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.594473 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.595493 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.597015 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.598006 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.598946 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.599932 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.600837 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.601810 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.602686 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.603656 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.604462 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.605679 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.606567 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.607446 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.608467 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.609919 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.611540 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.612951 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.614315 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.617048 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.618410 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.619981 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.621439 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.622983 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.624517 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.625823 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.627240 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.628484 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.630363 642 storage/allocator_test.go:5373 s6 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:38.631549 642 storage/allocator_test.go:5373 s6 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:38.633792 642 storage/allocator_test.go:5373 s6 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:38.635101 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.636367 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.637343 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.638467 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.639267 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.640037 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.640951 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.642074 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.643935 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.646686 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.647837 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.648976 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.650186 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.651136 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.652773 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.653693 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.654568 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.655889 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.657134 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.658414 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.659740 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.660980 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.662994 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.665201 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.666571 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.695479 642 storage/allocator_test.go:5373 s7 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.700740 642 storage/allocator_test.go:5373 s3 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.707552 642 storage/allocator_test.go:5373 s1 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.709535 642 storage/allocator_test.go:5373 s1 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.718233 642 storage/allocator_test.go:5373 s9 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.721266 642 storage/allocator_test.go:5373 s9 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.727380 642 storage/allocator_test.go:5373 s11 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.729436 642 storage/allocator_test.go:5373 s9 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.733994 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.736974 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.746314 642 storage/allocator_test.go:5373 s11 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.749552 642 storage/allocator_test.go:5373 s17 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.756002 642 storage/allocator_test.go:5373 s8 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:38.757554 642 storage/allocator_test.go:5373 s17 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.759319 642 storage/allocator_test.go:5373 s17 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.763230 642 storage/allocator_test.go:5373 s10 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:38.764740 642 storage/allocator_test.go:5373 s7 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.766542 642 storage/allocator_test.go:5373 s7 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.776388 642 storage/allocator_test.go:5373 s3 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:38.778336 642 storage/allocator_test.go:5373 s19 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.781526 642 storage/allocator_test.go:5373 s4 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.787206 642 storage/allocator_test.go:5373 s0 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:38.788115 642 storage/allocator_test.go:5373 s0 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:38.789253 642 storage/allocator_test.go:5373 s13 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.791204 642 storage/allocator_test.go:5373 s15 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.791987 642 storage/allocator_test.go:5373 s4 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:38.797812 642 storage/allocator_test.go:5373 s16 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:38.798395 642 storage/allocator_test.go:5373 s16 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:38.798966 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:38.799524 642 storage/allocator_test.go:5373 s12 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:38.800123 642 storage/allocator_test.go:5373 s12 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:38.800716 642 storage/allocator_test.go:5373 s12 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:38.801195 642 storage/allocator_test.go:5373 s12 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:38.801741 642 storage/allocator_test.go:5373 s16 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:38.802215 642 storage/allocator_test.go:5373 s16 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:38.803199 642 storage/allocator_test.go:5373 s12 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:38.803746 642 storage/allocator_test.go:5373 s12 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:38.804564 642 storage/allocator_test.go:5373 s16 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:38.805326 642 storage/allocator_test.go:5373 s16 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.806040 642 storage/allocator_test.go:5373 s16 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:38.807015 642 storage/allocator_test.go:5373 s16 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:38.807706 642 storage/allocator_test.go:5373 s12 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:38.808445 642 storage/allocator_test.go:5373 s16 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:38.809493 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.810414 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.811406 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.812219 642 storage/allocator_test.go:5373 s12 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:38.812970 642 storage/allocator_test.go:5373 s12 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:38.813645 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.814299 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.815000 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.816891 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.818792 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.819931 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.821734 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.823001 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.825201 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.826516 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.827452 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.828109 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.828773 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.829790 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.830791 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.831791 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.832791 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.834275 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.835283 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.836214 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.837198 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.838838 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.839948 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.841516 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.842423 642 storage/allocator_test.go:5373 s2 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:38.843328 642 storage/allocator_test.go:5373 s9 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:38.844974 642 storage/allocator_test.go:5373 s9 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:38.846956 642 storage/allocator_test.go:5373 s9 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:38.848348 642 storage/allocator_test.go:5373 s9 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:38.850712 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.852297 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.853740 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:38.855506 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.858244 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.859538 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.860920 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.864322 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.923870 642 storage/allocator_test.go:5373 s3 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:38.925188 642 storage/allocator_test.go:5373 s15 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.926554 642 storage/allocator_test.go:5373 s14 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.931416 642 storage/allocator_test.go:5373 s3 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:38.932519 642 storage/allocator_test.go:5373 s14 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.934060 642 storage/allocator_test.go:5373 s17 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.941459 642 storage/allocator_test.go:5373 s4 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:38.943743 642 storage/allocator_test.go:5373 s15 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.947313 642 storage/allocator_test.go:5373 s17 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.952903 642 storage/allocator_test.go:5373 s4 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:38.954090 642 storage/allocator_test.go:5373 s4 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.956475 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.961885 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.963198 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.965155 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.972222 642 storage/allocator_test.go:5373 s0 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:38.974923 642 storage/allocator_test.go:5373 s0 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:38.978256 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:38.982469 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.984108 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.985275 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.986831 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.989030 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.990709 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.992195 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:38.994224 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.005793 642 storage/allocator_test.go:5373 s2 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:39.009886 642 storage/allocator_test.go:5373 s1 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:39.011925 642 storage/allocator_test.go:5373 s2 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:39.014811 642 storage/allocator_test.go:5373 s8 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.018198 642 storage/allocator_test.go:5373 s10 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:39.019864 642 storage/allocator_test.go:5373 s6 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:39.021382 642 storage/allocator_test.go:5373 s10 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:39.022723 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.024176 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.030090 642 storage/allocator_test.go:5373 s12 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:39.030776 642 storage/allocator_test.go:5373 s12 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:39.031404 642 storage/allocator_test.go:5373 s9 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:39.032082 642 storage/allocator_test.go:5373 s9 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:39.032764 642 storage/allocator_test.go:5373 s9 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:39.033465 642 storage/allocator_test.go:5373 s9 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:39.034262 642 storage/allocator_test.go:5373 s9 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:39.034813 642 storage/allocator_test.go:5373 s9 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:39.035322 642 storage/allocator_test.go:5373 s12 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:39.036473 642 storage/allocator_test.go:5373 s12 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:39.037056 642 storage/allocator_test.go:5373 s9 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:39.037951 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.038567 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.039217 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.039982 642 storage/allocator_test.go:5373 s12 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:39.040632 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.041285 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.042010 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.042696 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.043420 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.044104 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.044797 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.045390 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.046002 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.046668 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.047285 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.053544 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.054517 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.055764 642 storage/allocator_test.go:5373 s12 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:39.056707 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.057677 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.059014 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.059985 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.060951 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.061906 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.063461 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.065007 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.066564 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.068084 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.069445 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.070876 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.072265 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.073675 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.075946 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.084807 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:39.087899 642 storage/allocator_test.go:5373 s5 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:39.088775 642 storage/allocator_test.go:5373 s16 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:39.090017 642 storage/allocator_test.go:5373 s4 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:39.094246 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:39.097331 642 storage/allocator_test.go:5373 s13 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:39.099472 642 storage/allocator_test.go:5373 s4 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:39.100753 642 storage/allocator_test.go:5373 s16 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:39.105813 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.112968 642 storage/allocator_test.go:5373 s16 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:39.113934 642 storage/allocator_test.go:5373 s16 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:39.115756 642 storage/allocator_test.go:5373 s13 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:39.121457 642 storage/allocator_test.go:5373 s11 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:39.122659 642 storage/allocator_test.go:5373 s11 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:39.124268 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.125024 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.125979 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.126829 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.128349 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.130529 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.131509 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.135211 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.135943 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.136841 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.137523 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.138631 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.141176 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.142823 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.146104 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.147321 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.149333 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.150697 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.153118 642 storage/allocator_test.go:5373 s18 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.156374 642 storage/allocator_test.go:5373 s19 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:39.157969 642 storage/allocator_test.go:5373 s19 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:39.161442 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.162857 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.164238 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.165230 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.166120 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.167900 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.182799 642 storage/allocator_test.go:5373 s4 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.189281 642 storage/allocator_test.go:5373 s4 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.204216 642 storage/allocator_test.go:5373 s2 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.214366 642 storage/allocator_test.go:5373 s2 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.223125 642 storage/allocator_test.go:5373 s2 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.239287 642 storage/allocator_test.go:5373 s12 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.248366 642 storage/allocator_test.go:5373 s12 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.258517 642 storage/allocator_test.go:5373 s12 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.270236 642 storage/allocator_test.go:5373 s10 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.275095 642 storage/allocator_test.go:5373 s10 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:39.277919 642 storage/allocator_test.go:5373 s10 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.286714 642 storage/allocator_test.go:5373 s19 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:39.289626 642 storage/allocator_test.go:5373 s19 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:39.290409 642 storage/allocator_test.go:5373 s19 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:39.291260 642 storage/allocator_test.go:5373 s19 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:39.292065 642 storage/allocator_test.go:5373 s19 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:39.292880 642 storage/allocator_test.go:5373 s19 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:39.293670 642 storage/allocator_test.go:5373 s19 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:39.294451 642 storage/allocator_test.go:5373 s19 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.295651 642 storage/allocator_test.go:5373 s19 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:39.301492 642 storage/allocator_test.go:5373 s19 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:39.305155 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.306055 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.307361 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.310122 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.310811 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.311435 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.312191 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.313358 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.316851 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.318232 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.319543 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.320658 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.322118 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.323123 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.324155 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.325308 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.326420 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.327496 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.328545 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.329686 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.330838 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.338157 642 storage/allocator_test.go:5373 s9 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.345214 642 storage/allocator_test.go:5373 s9 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.364356 642 storage/allocator_test.go:5373 s2 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.371822 642 storage/allocator_test.go:5373 s2 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.377348 642 storage/allocator_test.go:5373 s8 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.394769 642 storage/allocator_test.go:5373 s17 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.396360 642 storage/allocator_test.go:5373 s5 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:39.402791 642 storage/allocator_test.go:5373 s17 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.410347 642 storage/allocator_test.go:5373 s19 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:39.411246 642 storage/allocator_test.go:5373 s19 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:39.412083 642 storage/allocator_test.go:5373 s19 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:39.412918 642 storage/allocator_test.go:5373 s19 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:39.414083 642 storage/allocator_test.go:5373 s19 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:39.414926 642 storage/allocator_test.go:5373 s19 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:39.415767 642 storage/allocator_test.go:5373 s19 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:39.416751 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.417739 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.418721 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.419686 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.420676 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.421647 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.422635 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.423654 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.424572 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.425689 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.426644 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.428031 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.429039 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.430015 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.430980 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.432341 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.433316 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.434267 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.435259 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.436249 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.437238 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.438208 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.439233 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.440214 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.441184 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.442180 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.443187 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.445217 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.448850 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.450265 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.451769 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.453166 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.454517 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.456570 642 storage/allocator_test.go:5373 s4 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:39.457862 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.458840 642 storage/allocator_test.go:5373 s0 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:39.459944 642 storage/allocator_test.go:5373 s4 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:39.461262 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.462659 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.464032 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.465101 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.466357 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.467732 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.469120 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.470150 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.471277 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.472319 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.481187 642 storage/allocator_test.go:5373 s6 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:39.482487 642 storage/allocator_test.go:5373 s7 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.490125 642 storage/allocator_test.go:5373 s6 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:39.491463 642 storage/allocator_test.go:5373 s7 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.508447 642 storage/allocator_test.go:5373 s3 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:39.510074 642 storage/allocator_test.go:5373 s3 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.512650 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.513324 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.516511 642 storage/allocator_test.go:5373 s1 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:39.517369 642 storage/allocator_test.go:5373 s1 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.519746 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.520422 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.524119 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.525039 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.529396 642 storage/allocator_test.go:5373 s14 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:39.530311 642 storage/allocator_test.go:5373 s13 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:39.537490 642 storage/allocator_test.go:5373 s15 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:39.539342 642 storage/allocator_test.go:5373 s10 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.540553 642 storage/allocator_test.go:5373 s16 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:39.541380 642 storage/allocator_test.go:5373 s12 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:39.544505 642 storage/allocator_test.go:5373 s11 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:39.545135 642 storage/allocator_test.go:5373 s16 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:39.552069 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.553243 642 storage/allocator_test.go:5373 s18 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.564978 642 storage/allocator_test.go:5373 s7 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.570997 642 storage/allocator_test.go:5373 s19 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.577692 642 storage/allocator_test.go:5373 s5 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:39.582450 642 storage/allocator_test.go:5373 s19 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.593111 642 storage/allocator_test.go:5373 s17 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.601374 642 storage/allocator_test.go:5373 s5 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.616397 642 storage/allocator_test.go:5373 s19 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.626688 642 storage/allocator_test.go:5373 s6 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.636284 642 storage/allocator_test.go:5373 s6 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.649355 642 storage/allocator_test.go:5373 s17 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.656696 642 storage/allocator_test.go:5373 s19 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.664252 642 storage/allocator_test.go:5373 s16 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:39.664952 642 storage/allocator_test.go:5373 s7 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:39.665665 642 storage/allocator_test.go:5373 s16 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:39.666915 642 storage/allocator_test.go:5373 s16 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:39.668005 642 storage/allocator_test.go:5373 s17 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:39.668555 642 storage/allocator_test.go:5373 s16 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:39.669191 642 storage/allocator_test.go:5373 s7 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:39.669713 642 storage/allocator_test.go:5373 s16 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:39.670339 642 storage/allocator_test.go:5373 s7 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:39.671471 642 storage/allocator_test.go:5373 s16 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:39.674370 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.674988 642 storage/allocator_test.go:5373 s17 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:39.676267 642 storage/allocator_test.go:5373 s16 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:39.678720 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.679969 642 storage/allocator_test.go:5373 s16 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:39.681154 642 storage/allocator_test.go:5373 s17 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:39.682366 642 storage/allocator_test.go:5373 s16 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:39.683401 642 storage/allocator_test.go:5373 s16 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:39.684420 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.685884 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.687259 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.691373 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.692861 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.694106 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.695776 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.696863 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.697981 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.699090 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.700188 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.701206 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.702150 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.703540 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.710899 642 storage/allocator_test.go:5373 s10 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:39.714777 642 storage/allocator_test.go:5373 s10 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.715502 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.718629 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.722242 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.723359 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.732267 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.738770 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.739719 642 storage/allocator_test.go:5373 s1 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:39.743428 642 storage/allocator_test.go:5373 s8 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:39.744477 642 storage/allocator_test.go:5373 s2 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:39.746328 642 storage/allocator_test.go:5373 s1 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.746917 642 storage/allocator_test.go:5373 s0 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:39.749647 642 storage/allocator_test.go:5373 s3 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:39.750854 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.752559 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.753262 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.758988 642 storage/allocator_test.go:5373 s19 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:39.761066 642 storage/allocator_test.go:5373 s19 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:39.765378 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.766696 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.769494 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.785229 642 storage/allocator_test.go:5373 s16 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:39.786176 642 storage/allocator_test.go:5373 s7 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:39.787080 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:39.793865 642 storage/allocator_test.go:5373 s7 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.794755 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:39.795849 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.796745 642 storage/allocator_test.go:5373 s6 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:39.803666 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.804780 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.806296 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.807790 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.819412 642 storage/allocator_test.go:5373 s10 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.820313 642 storage/allocator_test.go:5373 s13 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:39.821072 642 storage/allocator_test.go:5373 s10 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:39.821884 642 storage/allocator_test.go:5373 s13 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:39.825249 642 storage/allocator_test.go:5373 s10 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:39.831504 642 storage/allocator_test.go:5373 s16 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.832190 642 storage/allocator_test.go:5373 s16 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:39.832860 642 storage/allocator_test.go:5373 s12 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:39.833639 642 storage/allocator_test.go:5373 s11 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:39.835365 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.839127 642 storage/allocator_test.go:5373 s16 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.839772 642 storage/allocator_test.go:5373 s13 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:39.840860 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.841957 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.845211 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.852411 642 storage/allocator_test.go:5373 s5 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.853348 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:39.854087 642 storage/allocator_test.go:5373 s5 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:39.854766 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.856339 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.857452 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.858934 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.860491 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.862130 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.863018 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.863682 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.864252 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.865701 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.870983 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.873320 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.875193 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.876934 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.877968 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.879476 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.880955 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.885163 642 storage/allocator_test.go:5373 s18 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:39.888088 642 storage/allocator_test.go:5373 s11 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:39.891020 642 storage/allocator_test.go:5373 s14 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:39.894009 642 storage/allocator_test.go:5373 s4 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.896727 642 storage/allocator_test.go:5373 s9 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.897973 642 storage/allocator_test.go:5373 s4 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:39.898828 642 storage/allocator_test.go:5373 s6 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:39.899882 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.901386 642 storage/allocator_test.go:5373 s6 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:39.902560 642 storage/allocator_test.go:5373 s12 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:39.904264 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.905973 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.907660 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.909132 642 storage/allocator_test.go:5373 s15 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.917831 642 storage/allocator_test.go:5373 s1 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:39.918768 642 storage/allocator_test.go:5373 s19 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:39.919628 642 storage/allocator_test.go:5373 s0 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:39.920496 642 storage/allocator_test.go:5373 s19 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:39.921765 642 storage/allocator_test.go:5373 s0 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:39.922557 642 storage/allocator_test.go:5373 s0 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:39.923406 642 storage/allocator_test.go:5373 s19 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:39.924277 642 storage/allocator_test.go:5373 s19 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:39.925068 642 storage/allocator_test.go:5373 s19 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:39.925829 642 storage/allocator_test.go:5373 s19 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:39.926648 642 storage/allocator_test.go:5373 s2 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:39.927397 642 storage/allocator_test.go:5373 s2 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:39.928156 642 storage/allocator_test.go:5373 s0 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.928830 642 storage/allocator_test.go:5373 s0 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:39.929441 642 storage/allocator_test.go:5373 s1 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:39.936641 642 storage/allocator_test.go:5373 s2 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:39.937560 642 storage/allocator_test.go:5373 s2 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:39.938408 642 storage/allocator_test.go:5373 s19 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:39.939485 642 storage/allocator_test.go:5373 s0 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:39.940927 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.941764 642 storage/allocator_test.go:5373 s0 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:39.942735 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.943741 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.944727 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.945676 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.946638 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.947636 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.948407 642 storage/allocator_test.go:5373 s1 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:39.949414 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.950277 642 storage/allocator_test.go:5373 s1 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:39.954236 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.955723 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.957070 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.958431 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.960493 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.961905 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.963242 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.964648 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.965975 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.967325 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.968729 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.970039 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.971397 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.972823 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.974085 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.976272 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.977458 642 storage/allocator_test.go:5373 s17 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:39.978706 642 storage/allocator_test.go:5373 s17 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:39.980079 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.985249 642 storage/allocator_test.go:5373 s7 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:39.986057 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:39.987406 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.988539 642 storage/allocator_test.go:5373 s7 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:39.989488 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.994754 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.996127 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:39.998684 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.000917 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.002998 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.060953 642 storage/allocator_test.go:5373 s5 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.068093 642 storage/allocator_test.go:5373 s9 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.079505 642 storage/allocator_test.go:5373 s3 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.106822 642 storage/allocator_test.go:5373 s3 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.118224 642 storage/allocator_test.go:5373 s18 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.127287 642 storage/allocator_test.go:5373 s16 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.136819 642 storage/allocator_test.go:5373 s4 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.143252 642 storage/allocator_test.go:5373 s4 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.156711 642 storage/allocator_test.go:5373 s4 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.171284 642 storage/allocator_test.go:5373 s15 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.183324 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:40.184110 642 storage/allocator_test.go:5373 s7 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:40.184885 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.185662 642 storage/allocator_test.go:5373 s7 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:40.186351 642 storage/allocator_test.go:5373 s7 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:40.190187 642 storage/allocator_test.go:5373 s7 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:40.191332 642 storage/allocator_test.go:5373 s7 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:40.192383 642 storage/allocator_test.go:5373 s7 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:40.193104 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:40.193840 642 storage/allocator_test.go:5373 s7 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:40.194430 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.195185 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.196092 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.202652 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.204786 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.206160 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.207120 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.208545 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.210029 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.211447 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.212844 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.214186 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.215540 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.229985 642 storage/allocator_test.go:5373 s14 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.235846 642 storage/allocator_test.go:5373 s18 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.245928 642 storage/allocator_test.go:5373 s12 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.256946 642 storage/allocator_test.go:5373 s11 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.263375 642 storage/allocator_test.go:5373 s8 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.271795 642 storage/allocator_test.go:5373 s8 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.285149 642 storage/allocator_test.go:5373 s10 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.286075 642 storage/allocator_test.go:5373 s9 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:40.299866 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.302733 642 storage/allocator_test.go:5373 s7 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:40.303557 642 storage/allocator_test.go:5373 s7 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:40.304384 642 storage/allocator_test.go:5373 s0 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:40.305208 642 storage/allocator_test.go:5373 s7 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:40.306046 642 storage/allocator_test.go:5373 s7 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:40.307259 642 storage/allocator_test.go:5373 s7 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:40.309299 642 storage/allocator_test.go:5373 s7 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:40.311419 642 storage/allocator_test.go:5373 s1 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.314438 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.316799 642 storage/allocator_test.go:5373 s1 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:40.317830 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.318660 642 storage/allocator_test.go:5373 s1 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:40.319670 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.320894 642 storage/allocator_test.go:5373 s0 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:40.323121 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.324258 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.331792 642 storage/allocator_test.go:5373 s2 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:40.346908 642 storage/allocator_test.go:5373 s2 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:40.350480 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.356737 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.359566 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.364726 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.368639 642 storage/allocator_test.go:5373 s3 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.378809 642 storage/allocator_test.go:5373 s13 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:40.379740 642 storage/allocator_test.go:5373 s17 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.382114 642 storage/allocator_test.go:5373 s4 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.388968 642 storage/allocator_test.go:5373 s13 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:40.390336 642 storage/allocator_test.go:5373 s4 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.393092 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.399861 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.401210 642 storage/allocator_test.go:5373 s17 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.405660 642 storage/allocator_test.go:5373 s5 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.416284 642 storage/allocator_test.go:5373 s19 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:40.417153 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.418325 642 storage/allocator_test.go:5373 s16 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:40.419915 642 storage/allocator_test.go:5373 s5 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.422475 642 storage/allocator_test.go:5373 s6 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:40.425288 642 storage/allocator_test.go:5373 s6 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:40.433315 642 storage/allocator_test.go:5373 s7 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:40.433932 642 storage/allocator_test.go:5373 s9 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:40.439160 642 storage/allocator_test.go:5373 s1 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:40.443286 642 storage/allocator_test.go:5373 s11 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:40.444280 642 storage/allocator_test.go:5373 s1 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:40.448809 642 storage/allocator_test.go:5373 s8 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:40.455128 642 storage/allocator_test.go:5373 s2 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:40.456444 642 storage/allocator_test.go:5373 s14 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:40.462180 642 storage/allocator_test.go:5373 s7 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:40.464900 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:40.465953 642 storage/allocator_test.go:5373 s2 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:40.470076 642 storage/allocator_test.go:5373 s2 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:40.471133 642 storage/allocator_test.go:5373 s2 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:40.475826 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:40.482445 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.485480 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.490209 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.496339 642 storage/allocator_test.go:5373 s7 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:40.497675 642 storage/allocator_test.go:5373 s7 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:40.505755 642 storage/allocator_test.go:5373 s9 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:40.510103 642 storage/allocator_test.go:5373 s10 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.511748 642 storage/allocator_test.go:5373 s9 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:40.512622 642 storage/allocator_test.go:5373 s18 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:40.513450 642 storage/allocator_test.go:5373 s11 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.517190 642 storage/allocator_test.go:5373 s8 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:40.519832 642 storage/allocator_test.go:5373 s10 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.521234 642 storage/allocator_test.go:5373 s15 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:40.523776 642 storage/allocator_test.go:5373 s1 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:40.524525 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.527887 642 storage/allocator_test.go:5373 s15 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:40.530633 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.532400 642 storage/allocator_test.go:5373 s14 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:40.534331 642 storage/allocator_test.go:5369 s15 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.536103 642 storage/allocator_test.go:5373 s8 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.541392 642 storage/allocator_test.go:5373 s12 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:40.546093 642 storage/allocator_test.go:5373 s12 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.547739 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.548704 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.549633 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.550474 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.551446 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.552370 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.554689 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.555668 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.563476 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:40.564065 642 storage/allocator_test.go:5373 s16 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:40.564815 642 storage/allocator_test.go:5373 s16 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:40.565251 642 storage/allocator_test.go:5373 s5 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:40.566258 642 storage/allocator_test.go:5373 s16 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.566884 642 storage/allocator_test.go:5373 s16 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:40.567517 642 storage/allocator_test.go:5373 s5 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:40.568097 642 storage/allocator_test.go:5373 s16 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:40.568625 642 storage/allocator_test.go:5373 s16 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:40.569150 642 storage/allocator_test.go:5373 s16 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:40.569648 642 storage/allocator_test.go:5373 s16 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:40.570205 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.570822 642 storage/allocator_test.go:5373 s5 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:40.571870 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.574053 642 storage/allocator_test.go:5373 s5 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:40.574950 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.575520 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:40.576073 642 storage/allocator_test.go:5373 s5 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:40.576780 642 storage/allocator_test.go:5373 s5 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:40.577511 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.579026 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.581801 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.582843 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.583914 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.584841 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.585904 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.586825 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.587749 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.588784 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.590069 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.590956 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.592355 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.593547 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.595635 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.597735 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.598894 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.600869 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.601935 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.603046 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.604445 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.605782 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.607215 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.608626 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.609966 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.611415 642 storage/allocator_test.go:5369 s16 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.613467 642 storage/allocator_test.go:5373 s6 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:40.614648 642 storage/allocator_test.go:5373 s6 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:40.615788 642 storage/allocator_test.go:5373 s6 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:40.618357 642 storage/allocator_test.go:5373 s6 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:40.619136 642 storage/allocator_test.go:5373 s6 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:40.620244 642 storage/allocator_test.go:5373 s6 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:40.620784 642 storage/allocator_test.go:5373 s6 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:40.621456 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.622185 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.622938 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.623929 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.624665 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.625362 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.626508 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.628507 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.629423 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.631018 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.631931 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.632644 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.633252 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.634007 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.635082 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.635932 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.636682 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.637779 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.640986 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.642449 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.645250 642 storage/allocator_test.go:5373 s17 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:40.646480 642 storage/allocator_test.go:5373 s7 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:40.647784 642 storage/allocator_test.go:5373 s13 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:40.649784 642 storage/allocator_test.go:5373 s7 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:40.654212 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.655284 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:40.659630 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.660256 642 storage/allocator_test.go:5373 s2 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:40.661357 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.661979 642 storage/allocator_test.go:5373 s2 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:40.662665 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.663520 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.664673 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.665438 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.668235 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.669160 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.670953 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.671919 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.673070 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.674358 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.676184 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.677501 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.683271 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.684517 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.714664 642 storage/allocator_test.go:5373 s18 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.723328 642 storage/allocator_test.go:5373 s9 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.732073 642 storage/allocator_test.go:5373 s1 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.745466 642 storage/allocator_test.go:5373 s3 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.754647 642 storage/allocator_test.go:5373 s0 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.763943 642 storage/allocator_test.go:5373 s9 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.777411 642 storage/allocator_test.go:5373 s4 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.787638 642 storage/allocator_test.go:5373 s4 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.800568 642 storage/allocator_test.go:5373 s8 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.815962 642 storage/allocator_test.go:5373 s1 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.830004 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:40.830806 642 storage/allocator_test.go:5373 s2 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:40.832364 642 storage/allocator_test.go:5373 s2 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:40.833178 642 storage/allocator_test.go:5373 s2 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:40.833993 642 storage/allocator_test.go:5373 s2 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:40.834780 642 storage/allocator_test.go:5373 s2 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:40.835547 642 storage/allocator_test.go:5373 s2 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:40.838193 642 storage/allocator_test.go:5373 s2 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:40.838958 642 storage/allocator_test.go:5373 s2 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:40.839728 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:40.841879 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.844858 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.846000 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.847175 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.847843 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.850871 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.851760 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.852530 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.855051 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.867706 642 storage/allocator_test.go:5373 s8 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.874118 642 storage/allocator_test.go:5373 s0 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.886871 642 storage/allocator_test.go:5373 s8 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.898343 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.901069 642 storage/allocator_test.go:5373 s19 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:40.905887 642 storage/allocator_test.go:5373 s19 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.908207 642 storage/allocator_test.go:5373 s17 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:40.913981 642 storage/allocator_test.go:5373 s3 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.919001 642 storage/allocator_test.go:5373 s13 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:40.929704 642 storage/allocator_test.go:5373 s12 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.932080 642 storage/allocator_test.go:5373 s18 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:40.935841 642 storage/allocator_test.go:5373 s11 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:40.943433 642 storage/allocator_test.go:5373 s2 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:40.945930 642 storage/allocator_test.go:5373 s2 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:40.946567 642 storage/allocator_test.go:5373 s2 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:40.947230 642 storage/allocator_test.go:5373 s2 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:40.948718 642 storage/allocator_test.go:5373 s2 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:40.949248 642 storage/allocator_test.go:5373 s2 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:40.950041 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.951680 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.954952 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.955846 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.956790 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.959170 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.959827 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.960394 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:40.965713 642 storage/allocator_test.go:5373 s8 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:40.980258 642 storage/allocator_test.go:5373 s8 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.987896 642 storage/allocator_test.go:5373 s8 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:40.997079 642 storage/allocator_test.go:5373 s8 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.014988 642 storage/allocator_test.go:5373 s5 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.023646 642 storage/allocator_test.go:5373 s4 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.032547 642 storage/allocator_test.go:5373 s4 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.045539 642 storage/allocator_test.go:5373 s0 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.050335 642 storage/allocator_test.go:5373 s0 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:41.064429 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:41.065301 642 storage/allocator_test.go:5373 s2 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:41.066915 642 storage/allocator_test.go:5373 s2 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:41.067742 642 storage/allocator_test.go:5373 s2 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.068474 642 storage/allocator_test.go:5373 s2 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:41.069246 642 storage/allocator_test.go:5373 s2 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.069987 642 storage/allocator_test.go:5373 s2 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.070934 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.071887 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.072869 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.073861 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.074807 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.075794 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.076763 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.077688 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.078651 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.079560 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.080474 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.081482 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.082441 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.084205 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.085117 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.086065 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.086982 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.087921 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.088823 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.089735 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.090669 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.091551 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.092459 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.093335 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.094067 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.094946 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.095862 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.096664 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.097260 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.098015 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.098979 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.102239 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.103822 642 storage/allocator_test.go:5373 s3 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.119417 642 storage/allocator_test.go:5373 s6 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:41.120330 642 storage/allocator_test.go:5373 s1 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.128215 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:41.129449 642 storage/allocator_test.go:5373 s3 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.137122 642 storage/allocator_test.go:5373 s1 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:41.139336 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.154651 642 storage/allocator_test.go:5373 s12 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:41.156204 642 storage/allocator_test.go:5373 s18 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:41.157376 642 storage/allocator_test.go:5373 s10 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.160746 642 storage/allocator_test.go:5373 s12 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.163836 642 storage/allocator_test.go:5373 s11 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:41.164418 642 storage/allocator_test.go:5373 s14 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:41.165158 642 storage/allocator_test.go:5373 s18 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.167160 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.172994 642 storage/allocator_test.go:5373 s15 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:41.174286 642 storage/allocator_test.go:5373 s9 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:41.175542 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.179840 642 storage/allocator_test.go:5373 s10 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.186777 642 storage/allocator_test.go:5369 s18 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.187671 642 storage/allocator_test.go:5373 s13 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:41.188422 642 storage/allocator_test.go:5373 s17 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.190631 642 storage/allocator_test.go:5373 s17 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.192331 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.193486 642 storage/allocator_test.go:5373 s13 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:41.196232 642 storage/allocator_test.go:5369 s17 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.205147 642 storage/allocator_test.go:5373 s0 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:41.205897 642 storage/allocator_test.go:5373 s1 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:41.206982 642 storage/allocator_test.go:5373 s0 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:41.208108 642 storage/allocator_test.go:5373 s1 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.208869 642 storage/allocator_test.go:5373 s0 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:41.210429 642 storage/allocator_test.go:5373 s2 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:41.211478 642 storage/allocator_test.go:5373 s1 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:41.212727 642 storage/allocator_test.go:5373 s1 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:41.213522 642 storage/allocator_test.go:5373 s0 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.214258 642 storage/allocator_test.go:5373 s1 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:41.215000 642 storage/allocator_test.go:5373 s0 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:41.215740 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:41.216486 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.217157 642 storage/allocator_test.go:5373 s2 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:41.217879 642 storage/allocator_test.go:5373 s2 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:41.219568 642 storage/allocator_test.go:5373 s2 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:41.220233 642 storage/allocator_test.go:5373 s0 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:41.221133 642 storage/allocator_test.go:5373 s0 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:41.222178 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.222910 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.223682 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.224225 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.224844 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.225513 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.226173 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.226802 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.227474 642 storage/allocator_test.go:5373 s2 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:41.228439 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.229418 642 storage/allocator_test.go:5369 s0 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.230150 642 storage/allocator_test.go:5373 s2 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:41.232825 642 storage/allocator_test.go:5373 s7 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:41.239081 642 storage/allocator_test.go:5373 s4 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:41.241191 642 storage/allocator_test.go:5373 s4 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.244201 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.245360 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.246861 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.247421 642 storage/allocator_test.go:5373 s6 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:41.248262 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.251455 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.252983 642 storage/allocator_test.go:5373 s6 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.254955 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.255952 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.257384 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.258352 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.259342 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.266165 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.271481 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.273352 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.274164 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.275054 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.276762 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.277912 642 storage/allocator_test.go:5369 s4 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.283387 642 storage/allocator_test.go:5373 s8 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:41.285410 642 storage/allocator_test.go:5373 s7 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.288306 642 storage/allocator_test.go:5373 s5 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:41.289252 642 storage/allocator_test.go:5373 s7 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:41.290680 642 storage/allocator_test.go:5373 s8 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:41.291317 642 storage/allocator_test.go:5373 s7 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:41.292023 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.292825 642 storage/allocator_test.go:5373 s5 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:41.293755 642 storage/allocator_test.go:5373 s5 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:41.295546 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.297070 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.298850 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.299696 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.300466 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.301129 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.301786 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.302688 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.303783 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.305539 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.307059 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.309039 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.310090 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.311356 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.312788 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.314098 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.316203 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.318201 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.344705 642 storage/allocator_test.go:5373 s3 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.346346 642 storage/allocator_test.go:5373 s3 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.347672 642 storage/allocator_test.go:5373 s16 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.353809 642 storage/allocator_test.go:5373 s4 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.355418 642 storage/allocator_test.go:5373 s10 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.356682 642 storage/allocator_test.go:5373 s13 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.363873 642 storage/allocator_test.go:5373 s13 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.366896 642 storage/allocator_test.go:5373 s13 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.369864 642 storage/allocator_test.go:5373 s13 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.380548 642 storage/allocator_test.go:5373 s16 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.382080 642 storage/allocator_test.go:5373 s19 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.382882 642 storage/allocator_test.go:5373 s6 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.387411 642 storage/allocator_test.go:5373 s6 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.390958 642 storage/allocator_test.go:5373 s19 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.392061 642 storage/allocator_test.go:5373 s19 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.392973 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.394724 642 storage/allocator_test.go:5373 s6 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.397916 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.400105 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.401750 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.407024 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.412385 642 storage/allocator_test.go:5373 s14 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.413956 642 storage/allocator_test.go:5373 s4 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.414805 642 storage/allocator_test.go:5373 s9 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:41.415671 642 storage/allocator_test.go:5373 s10 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.419363 642 storage/allocator_test.go:5373 s14 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.422258 642 storage/allocator_test.go:5373 s15 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:41.424068 642 storage/allocator_test.go:5373 s3 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.425520 642 storage/allocator_test.go:5373 s9 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.426323 642 storage/allocator_test.go:5373 s18 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:41.427101 642 storage/allocator_test.go:5373 s12 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.429316 642 storage/allocator_test.go:5373 s10 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.431352 642 storage/allocator_test.go:5373 s12 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:41.433762 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.436317 642 storage/allocator_test.go:5369 s10 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.438152 642 storage/allocator_test.go:5369 s14 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.439168 642 storage/allocator_test.go:5373 s18 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.442695 642 storage/allocator_test.go:5369 s12 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.446911 642 storage/allocator_test.go:5373 s11 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:41.448271 642 storage/allocator_test.go:5373 s1 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.449269 642 storage/allocator_test.go:5373 s11 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.449847 642 storage/allocator_test.go:5373 s1 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:41.450502 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.451367 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.451993 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.452836 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.453472 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.454081 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.455345 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.456016 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.461518 642 storage/allocator_test.go:5373 s5 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:41.462177 642 storage/allocator_test.go:5373 s2 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:41.463042 642 storage/allocator_test.go:5373 s7 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:41.463633 642 storage/allocator_test.go:5373 s2 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:41.464267 642 storage/allocator_test.go:5373 s5 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:41.464936 642 storage/allocator_test.go:5373 s2 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.465424 642 storage/allocator_test.go:5373 s5 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:41.465952 642 storage/allocator_test.go:5373 s2 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:41.466432 642 storage/allocator_test.go:5373 s5 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:41.466958 642 storage/allocator_test.go:5373 s2 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:41.469088 642 storage/allocator_test.go:5373 s5 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:41.470225 642 storage/allocator_test.go:5373 s5 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:41.470926 642 storage/allocator_test.go:5373 s7 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:41.472336 642 storage/allocator_test.go:5373 s7 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:41.472888 642 storage/allocator_test.go:5373 s5 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:41.473297 642 storage/allocator_test.go:5373 s5 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:41.473760 642 storage/allocator_test.go:5373 s2 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:41.474339 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.475140 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.475564 642 storage/allocator_test.go:5373 s7 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:41.476324 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.477007 642 storage/allocator_test.go:5373 s2 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.477489 642 storage/allocator_test.go:5373 s7 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:41.478079 642 storage/allocator_test.go:5373 s2 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:41.478729 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.479342 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.480066 642 storage/allocator_test.go:5373 s7 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:41.480497 642 storage/allocator_test.go:5373 s7 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:41.481064 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.481667 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.482288 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.482854 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.483398 642 storage/allocator_test.go:5369 s2 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.484468 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.486773 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.487731 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.489438 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.491372 642 storage/allocator_test.go:5373 s6 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.492818 642 storage/allocator_test.go:5373 s13 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:41.493946 642 storage/allocator_test.go:5373 s6 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:41.495059 642 storage/allocator_test.go:5373 s6 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:41.496112 642 storage/allocator_test.go:5373 s13 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:41.497830 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.500861 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.502769 642 storage/allocator_test.go:5373 s19 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:41.503416 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.504481 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.505399 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.506234 642 storage/allocator_test.go:5373 s19 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:41.506883 642 storage/allocator_test.go:5373 s19 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:41.507537 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.508315 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.509425 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.511197 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.512657 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.513366 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.514499 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.516636 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.518238 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.519951 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.520896 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.521756 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.522564 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.523856 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.525140 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.526020 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.528475 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.530376 642 storage/allocator_test.go:5369 s6 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.531964 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.533159 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.534179 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.535235 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.536720 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.539213 642 storage/allocator_test.go:5369 s19 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=960 MiB), ranges=60, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.540889 642 storage/allocator_test.go:5373 s13 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:41.541493 642 storage/allocator_test.go:5373 s13 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.543099 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.544053 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.544808 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.545628 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.546236 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.546969 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.547686 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.548559 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.549801 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.550409 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.551262 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.551939 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.552916 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.553502 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.554232 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.555038 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.555845 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.556533 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.557226 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.558157 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.559532 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.560224 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.561611 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.562566 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.564910 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.566102 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.568230 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.570749 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.572922 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.574284 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.575659 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.577696 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.598560 642 storage/allocator_test.go:5373 s10 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.599935 642 storage/allocator_test.go:5373 s10 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.602795 642 storage/allocator_test.go:5373 s8 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.607944 642 storage/allocator_test.go:5373 s8 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.609252 642 storage/allocator_test.go:5373 s8 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.612063 642 storage/allocator_test.go:5373 s8 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.620152 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.622301 642 storage/allocator_test.go:5373 s15 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.627978 642 storage/allocator_test.go:5373 s4 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.633482 642 storage/allocator_test.go:5373 s4 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.634256 642 storage/allocator_test.go:5373 s9 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.634851 642 storage/allocator_test.go:5373 s9 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.636564 642 storage/allocator_test.go:5373 s9 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.640157 642 storage/allocator_test.go:5373 s9 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.641446 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.642299 642 storage/allocator_test.go:5373 s1 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.644308 642 storage/allocator_test.go:5373 s0 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.652955 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.654814 642 storage/allocator_test.go:5373 s1 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.656099 642 storage/allocator_test.go:5373 s1 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.659480 642 storage/allocator_test.go:5373 s12 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.664312 642 storage/allocator_test.go:5373 s0 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.665247 642 storage/allocator_test.go:5373 s11 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.665883 642 storage/allocator_test.go:5373 s16 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.666414 642 storage/allocator_test.go:5373 s11 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:41.668869 642 storage/allocator_test.go:5369 s11 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.673433 642 storage/allocator_test.go:5373 s14 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.674503 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.675277 642 storage/allocator_test.go:5373 s17 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.676097 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.677557 642 storage/allocator_test.go:5369 s1 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.683057 642 storage/allocator_test.go:5373 s10 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.684862 642 storage/allocator_test.go:5373 s16 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.685818 642 storage/allocator_test.go:5373 s15 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.686705 642 storage/allocator_test.go:5373 s3 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:41.689201 642 storage/allocator_test.go:5373 s15 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.693625 642 storage/allocator_test.go:5373 s3 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:41.694837 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.695718 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.696283 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.696887 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.697691 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.698879 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.700146 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.701067 642 storage/allocator_test.go:5369 s3 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.709254 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:41.710027 642 storage/allocator_test.go:5373 s13 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:41.710784 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.711625 642 storage/allocator_test.go:5373 s5 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:41.712357 642 storage/allocator_test.go:5373 s13 accepting snapshot from s4
[08:42:53][Step 2/2] I181018 08:36:41.713433 642 storage/allocator_test.go:5373 s13 accepting snapshot from s6
[08:42:53][Step 2/2] I181018 08:36:41.714959 642 storage/allocator_test.go:5373 s7 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:41.715718 642 storage/allocator_test.go:5373 s13 accepting snapshot from s10
[08:42:53][Step 2/2] I181018 08:36:41.716556 642 storage/allocator_test.go:5373 s5 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:41.717295 642 storage/allocator_test.go:5373 s13 accepting snapshot from s12
[08:42:53][Step 2/2] I181018 08:36:41.719736 642 storage/allocator_test.go:5373 s13 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:41.720350 642 storage/allocator_test.go:5373 s13 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:41.720997 642 storage/allocator_test.go:5373 s7 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:41.721490 642 storage/allocator_test.go:5373 s5 accepting snapshot from s17
[08:42:53][Step 2/2] I181018 08:36:41.722011 642 storage/allocator_test.go:5373 s5 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:41.722495 642 storage/allocator_test.go:5373 s5 accepting snapshot from s19
[08:42:53][Step 2/2] I181018 08:36:41.723058 642 storage/allocator_test.go:5373 s7 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:41.723561 642 storage/allocator_test.go:5373 s5 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:41.724089 642 storage/allocator_test.go:5373 s7 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.724718 642 storage/allocator_test.go:5373 s7 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:41.725353 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.726230 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.727261 642 storage/allocator_test.go:5373 s5 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:41.727943 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.728666 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.729256 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.730119 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.730866 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.731565 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.732245 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s17: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.732879 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s18: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.733513 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s19: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.734701 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.736202 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.737432 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.738778 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.740007 642 storage/allocator_test.go:5369 s5 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.741978 642 storage/allocator_test.go:5369 s7 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.744644 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s9: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.745866 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s10: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.747167 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.748396 642 storage/allocator_test.go:5369 s13 too full to accept snapshot from s12: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.750196 642 storage/allocator_test.go:5373 s8 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:41.751215 642 storage/allocator_test.go:5373 s8 accepting snapshot from s15
[08:42:53][Step 2/2] I181018 08:36:41.752283 642 storage/allocator_test.go:5373 s8 accepting snapshot from s16
[08:42:53][Step 2/2] I181018 08:36:41.755907 642 storage/allocator_test.go:5373 s8 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:41.756985 642 storage/allocator_test.go:5373 s8 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.757767 642 storage/allocator_test.go:5373 s8 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.759229 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.760208 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.761303 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.762043 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.764557 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.765563 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.766535 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.772085 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.773131 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.774388 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.775087 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.778393 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.780032 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.781801 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.784215 642 storage/allocator_test.go:5373 s9 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:41.785932 642 storage/allocator_test.go:5373 s9 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.787770 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.788828 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.793102 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.794455 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.795815 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.796776 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.798449 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.799728 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.800950 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.801908 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.804895 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.806171 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.807466 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.808408 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.809992 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s11: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.811267 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.812511 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s15: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.813459 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s16: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.817653 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.820858 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.823783 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.825657 642 storage/allocator_test.go:5369 s9 too full to accept snapshot from s8: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=944 MiB), ranges=59, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.854340 642 storage/allocator_test.go:5373 s14 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.862390 642 storage/allocator_test.go:5373 s1 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.872816 642 storage/allocator_test.go:5373 s3 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.884233 642 storage/allocator_test.go:5373 s3 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.889751 642 storage/allocator_test.go:5373 s0 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.899656 642 storage/allocator_test.go:5373 s2 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.912930 642 storage/allocator_test.go:5373 s2 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.921217 642 storage/allocator_test.go:5373 s2 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.930819 642 storage/allocator_test.go:5373 s0 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.938042 642 storage/allocator_test.go:5373 s14 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.940658 642 storage/allocator_test.go:5373 s1 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:41.949836 642 storage/allocator_test.go:5373 s8 accepting snapshot from s0
[08:42:53][Step 2/2] I181018 08:36:41.950890 642 storage/allocator_test.go:5373 s8 accepting snapshot from s1
[08:42:53][Step 2/2] I181018 08:36:41.952070 642 storage/allocator_test.go:5373 s8 accepting snapshot from s2
[08:42:53][Step 2/2] I181018 08:36:41.953092 642 storage/allocator_test.go:5373 s8 accepting snapshot from s3
[08:42:53][Step 2/2] I181018 08:36:41.954185 642 storage/allocator_test.go:5373 s8 accepting snapshot from s5
[08:42:53][Step 2/2] I181018 08:36:41.955007 642 storage/allocator_test.go:5373 s8 accepting snapshot from s7
[08:42:53][Step 2/2] I181018 08:36:41.955956 642 storage/allocator_test.go:5373 s8 accepting snapshot from s9
[08:42:53][Step 2/2] I181018 08:36:41.957743 642 storage/allocator_test.go:5373 s8 accepting snapshot from s13
[08:42:53][Step 2/2] I181018 08:36:41.958525 642 storage/allocator_test.go:5373 s8 accepting snapshot from s14
[08:42:53][Step 2/2] I181018 08:36:41.960572 642 storage/allocator_test.go:5373 s8 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:41.961928 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.962863 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.963820 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.964761 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.966156 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.967561 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.970442 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s13: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.971145 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s14: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.973569 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s0: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.974845 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s1: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.975853 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s2: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.976832 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s3: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.978339 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s4: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.979873 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s5: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.981282 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s6: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.982696 642 storage/allocator_test.go:5369 s8 too full to accept snapshot from s7: disk (capacity=1.0 GiB, available=48 MiB, used=976 MiB, logicalBytes=976 MiB), ranges=61, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:36:41.996077 642 storage/allocator_test.go:5373 s9 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:42.009856 642 storage/allocator_test.go:5373 s18 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:42.020698 642 storage/allocator_test.go:5373 s18 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:42.033353 642 storage/allocator_test.go:5373 s17 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:42.037680 642 storage/allocator_test.go:5373 s12 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:42.043499 642 storage/allocator_test.go:5373 s11 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:42.051317 642 storage/allocator_test.go:5373 s6 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:42.059240 642 storage/allocator_test.go:5373 s11 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:42.066918 642 storage/allocator_test.go:5373 s10 accepting snapshot from s18
[08:42:53][Step 2/2] I181018 08:36:42.070734 642 storage/allocator_test.go:5373 s16 accepting snapshot from s8
[08:42:53][Step 2/2] I181018 08:36:42.071922 642 storage/allocator_test.go:5373 s4 accepting snapshot from s11
[08:42:53][Step 2/2] I181018 08:36:42.074465 642 storage/allocator_test.go:5373 s15 accepting snapshot from s18
[08:42:53][Step 2/2] --- PASS: TestAllocatorFullDisks (12.06s)
[08:42:53][Step 2/2] === RUN TestSpanSetBatch
[08:42:53][Step 2/2] --- PASS: TestSpanSetBatch (0.01s)
[08:42:53][Step 2/2] === RUN TestSpanSetMVCCResolveWriteIntentRangeUsingIter
[08:42:53][Step 2/2] --- PASS: TestSpanSetMVCCResolveWriteIntentRangeUsingIter (0.06s)
[08:42:53][Step 2/2] === RUN TestCommandQueue
[08:42:53][Step 2/2] --- PASS: TestCommandQueue (0.01s)
[08:42:53][Step 2/2] === RUN TestCommandQueueWriteWaitForNonAdjacentRead
[08:42:53][Step 2/2] --- PASS: TestCommandQueueWriteWaitForNonAdjacentRead (0.01s)
[08:42:53][Step 2/2] === RUN TestCommandQueueNoWaitOnReadOnly
[08:42:53][Step 2/2] --- PASS: TestCommandQueueNoWaitOnReadOnly (0.02s)
[08:42:53][Step 2/2] === RUN TestCommandQueueMultipleExecutingCommands
[08:42:53][Step 2/2] --- PASS: TestCommandQueueMultipleExecutingCommands (0.01s)
[08:42:53][Step 2/2] === RUN TestCommandQueueMultiplePendingCommands
[08:42:53][Step 2/2] --- PASS: TestCommandQueueMultiplePendingCommands (0.02s)
[08:42:53][Step 2/2] === RUN TestCommandQueueRemove
[08:42:53][Step 2/2] --- PASS: TestCommandQueueRemove (0.01s)
[08:42:53][Step 2/2] === RUN TestCommandQueueExclusiveEnd
[08:42:53][Step 2/2] --- PASS: TestCommandQueueExclusiveEnd (0.01s)
[08:42:53][Step 2/2] === RUN TestCommandQueueSelfOverlap
[08:42:53][Step 2/2] --- PASS: TestCommandQueueSelfOverlap (0.00s)
[08:42:53][Step 2/2] === RUN TestCommandQueueCoveringOptimization
[08:42:53][Step 2/2] --- PASS: TestCommandQueueCoveringOptimization (0.01s)
[08:42:53][Step 2/2] === RUN TestCommandQueueWithoutCoveringOptimization
[08:42:53][Step 2/2] --- PASS: TestCommandQueueWithoutCoveringOptimization (0.00s)
[08:42:53][Step 2/2] === RUN TestCommandQueueIssue6495
[08:42:53][Step 2/2] --- PASS: TestCommandQueueIssue6495 (0.01s)
[08:42:53][Step 2/2] === RUN TestCommandQueueTimestamps
[08:42:53][Step 2/2] --- PASS: TestCommandQueueTimestamps (0.01s)
[08:42:53][Step 2/2] === RUN TestCommandQueueEnclosedRead
[08:42:53][Step 2/2] --- PASS: TestCommandQueueEnclosedRead (0.00s)
[08:42:53][Step 2/2] === RUN TestCommandQueueEnclosedWrite
[08:42:53][Step 2/2] --- PASS: TestCommandQueueEnclosedWrite (0.01s)
[08:42:53][Step 2/2] === RUN TestCommandQueueTimestampsEmpty
[08:42:53][Step 2/2] --- PASS: TestCommandQueueTimestampsEmpty (0.01s)
[08:42:53][Step 2/2] === RUN TestCommandQueueTransitiveDependencies
[08:42:53][Step 2/2] --- PASS: TestCommandQueueTransitiveDependencies (0.16s)
[08:42:53][Step 2/2] === RUN TestCommandQueueGetSnapshotWithReadBuffer
[08:42:53][Step 2/2] --- PASS: TestCommandQueueGetSnapshotWithReadBuffer (0.01s)
[08:42:53][Step 2/2] === RUN TestCommandQueueGetSnapshotWithChildren
[08:42:53][Step 2/2] --- PASS: TestCommandQueueGetSnapshotWithChildren (0.00s)
[08:42:53][Step 2/2] === RUN TestCommandQueueGetSnapshotWithDisappearingPrereq
[08:42:53][Step 2/2] --- PASS: TestCommandQueueGetSnapshotWithDisappearingPrereq (0.01s)
[08:42:53][Step 2/2] === RUN TestEntryCache
[08:42:53][Step 2/2] --- PASS: TestEntryCache (0.01s)
[08:42:53][Step 2/2] === RUN TestEntryCacheClearTo
[08:42:53][Step 2/2] --- PASS: TestEntryCacheClearTo (0.00s)
[08:42:53][Step 2/2] === RUN TestEntryCacheEviction
[08:42:53][Step 2/2] --- PASS: TestEntryCacheEviction (0.01s)
[08:42:53][Step 2/2] === RUN TestGCQueueScoreString
[08:42:53][Step 2/2] --- PASS: TestGCQueueScoreString (0.00s)
[08:42:53][Step 2/2] === RUN TestGCQueueMakeGCScoreInvariantQuick
[08:42:53][Step 2/2] --- PASS: TestGCQueueMakeGCScoreInvariantQuick (2.62s)
[08:42:53][Step 2/2] === RUN TestGCQueueMakeGCScoreAnomalousStats
[08:42:53][Step 2/2] --- PASS: TestGCQueueMakeGCScoreAnomalousStats (0.05s)
[08:42:53][Step 2/2] === RUN TestGCQueueMakeGCScoreRealistic
[08:42:53][Step 2/2] --- PASS: TestGCQueueMakeGCScoreRealistic (0.12s)
[08:42:53][Step 2/2] === RUN TestGCQueueProcess
[08:42:53][Step 2/2] --- PASS: TestGCQueueProcess (0.11s)
[08:42:53][Step 2/2] === RUN TestGCQueueTransactionTable
[08:42:53][Step 2/2] W181018 08:36:45.464104 759 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "intent" ({ID:82364c5e-2c8c-4a6b-afae-4b4a56ade6b3 Isolation:SERIALIZABLE Key:[102] Epoch:0 Timestamp:9223372036.854775807,2147483647 Priority:469306 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] I181018 08:36:45.466072 760 storage/replica_command.go:75 [s1,r1/1:/M{in-ax}] test injecting error: boom
[08:42:53][Step 2/2] W181018 08:36:45.466425 760 storage/intent_resolver.go:838 [s1] failed to cleanup transaction intents: failed to resolve intents: boom
[08:42:53][Step 2/2] --- PASS: TestGCQueueTransactionTable (0.07s)
[08:42:53][Step 2/2] === RUN TestGCQueueIntentResolution
[08:42:53][Step 2/2] --- PASS: TestGCQueueIntentResolution (0.14s)
[08:42:53][Step 2/2] === RUN TestGCQueueLastProcessedTimestamps
[08:42:53][Step 2/2] --- PASS: TestGCQueueLastProcessedTimestamps (0.10s)
[08:42:53][Step 2/2] === RUN TestGCQueueChunkRequests
[08:42:53][Step 2/2] I181018 08:36:49.891921 937 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async] storage.raftlog: processing replica
[08:42:53][Step 2/2] E181018 08:36:49.894427 1039 storage/queue.go:791 [raftlog,s1,r1/1:/M{in-ax}] result is ambiguous (server shutdown)
[08:42:53][Step 2/2] --- PASS: TestGCQueueChunkRequests (4.18s)
[08:42:53][Step 2/2] === RUN TestPushTransactionsWithNonPendingIntent
[08:42:53][Step 2/2] --- PASS: TestPushTransactionsWithNonPendingIntent (0.05s)
[08:42:53][Step 2/2] === RUN TestContendedIntent
[08:42:53][Step 2/2] === RUN TestContendedIntent/#00
[08:42:53][Step 2/2] === RUN TestContendedIntent/#01
[08:42:53][Step 2/2] === RUN TestContendedIntent/#02
[08:42:53][Step 2/2] === RUN TestContendedIntent/#03
[08:42:53][Step 2/2] === RUN TestContendedIntent/#04
[08:42:53][Step 2/2] --- PASS: TestContendedIntent (0.05s)
[08:42:53][Step 2/2] --- PASS: TestContendedIntent/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestContendedIntent/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestContendedIntent/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestContendedIntent/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestContendedIntent/#04 (0.00s)
[08:42:53][Step 2/2] === RUN TestContendedIntentWithDependencyCycle
[08:42:53][Step 2/2] --- PASS: TestContendedIntentWithDependencyCycle (0.19s)
[08:42:53][Step 2/2] === RUN TestLeaseHistory
[08:42:53][Step 2/2] --- PASS: TestLeaseHistory (0.01s)
[08:42:53][Step 2/2] === RUN TestMergeQueueShouldQueue
[08:42:53][Step 2/2] === RUN TestMergeQueueShouldQueue/#00
[08:42:53][Step 2/2] === RUN TestMergeQueueShouldQueue/#01
[08:42:53][Step 2/2] === RUN TestMergeQueueShouldQueue/#02
[08:42:53][Step 2/2] === RUN TestMergeQueueShouldQueue/#03
[08:42:53][Step 2/2] === RUN TestMergeQueueShouldQueue/#04
[08:42:53][Step 2/2] === RUN TestMergeQueueShouldQueue/#05
[08:42:53][Step 2/2] === RUN TestMergeQueueShouldQueue/#06
[08:42:53][Step 2/2] === RUN TestMergeQueueShouldQueue/#07
[08:42:53][Step 2/2] === RUN TestMergeQueueShouldQueue/#08
[08:42:53][Step 2/2] === RUN TestMergeQueueShouldQueue/#09
[08:42:53][Step 2/2] === RUN TestMergeQueueShouldQueue/#10
[08:42:53][Step 2/2] --- PASS: TestMergeQueueShouldQueue (0.05s)
[08:42:53][Step 2/2] --- PASS: TestMergeQueueShouldQueue/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestMergeQueueShouldQueue/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestMergeQueueShouldQueue/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestMergeQueueShouldQueue/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestMergeQueueShouldQueue/#04 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestMergeQueueShouldQueue/#05 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestMergeQueueShouldQueue/#06 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestMergeQueueShouldQueue/#07 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestMergeQueueShouldQueue/#08 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestMergeQueueShouldQueue/#09 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestMergeQueueShouldQueue/#10 (0.00s)
[08:42:53][Step 2/2] === RUN TestShouldReplaceLiveness
[08:42:53][Step 2/2] === RUN TestShouldReplaceLiveness/#00
[08:42:53][Step 2/2] === RUN TestShouldReplaceLiveness/#01
[08:42:53][Step 2/2] === RUN TestShouldReplaceLiveness/#02
[08:42:53][Step 2/2] === RUN TestShouldReplaceLiveness/#03
[08:42:53][Step 2/2] === RUN TestShouldReplaceLiveness/#04
[08:42:53][Step 2/2] === RUN TestShouldReplaceLiveness/#05
[08:42:53][Step 2/2] === RUN TestShouldReplaceLiveness/#06
[08:42:53][Step 2/2] --- PASS: TestShouldReplaceLiveness (0.01s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplaceLiveness/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplaceLiveness/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplaceLiveness/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplaceLiveness/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplaceLiveness/#04 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplaceLiveness/#05 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplaceLiveness/#06 (0.00s)
[08:42:53][Step 2/2] === RUN TestQueuePriorityQueue
[08:42:53][Step 2/2] --- PASS: TestQueuePriorityQueue (0.01s)
[08:42:53][Step 2/2] === RUN TestBaseQueueAddUpdateAndRemove
[08:42:53][Step 2/2] I181018 08:36:50.298066 1547 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:36:50.298895 1547 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] --- PASS: TestBaseQueueAddUpdateAndRemove (0.05s)
[08:42:53][Step 2/2] === RUN TestBaseQueueAdd
[08:42:53][Step 2/2] --- PASS: TestBaseQueueAdd (0.10s)
[08:42:53][Step 2/2] === RUN TestBaseQueueProcess
[08:42:53][Step 2/2] I181018 08:36:50.440785 1571 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:36:50.441590 1571 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:36:50.444053 1571 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async] storage.test: processing replica
[08:42:53][Step 2/2] --- PASS: TestBaseQueueProcess (0.05s)
[08:42:53][Step 2/2] === RUN TestBaseQueueAddRemove
[08:42:53][Step 2/2] --- PASS: TestBaseQueueAddRemove (0.10s)
[08:42:53][Step 2/2] === RUN TestNeedsSystemConfig
[08:42:53][Step 2/2] I181018 08:36:50.591652 1924 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async] storage.test: processing replica
[08:42:53][Step 2/2] --- PASS: TestNeedsSystemConfig (0.04s)
[08:42:53][Step 2/2] === RUN TestAcceptsUnsplitRanges
[08:42:53][Step 2/2] I181018 08:36:50.633414 2006 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:36:50.633942 2006 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:36:50.636816 2006 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async] storage.test: processing replica
[08:42:53][Step 2/2] --- PASS: TestAcceptsUnsplitRanges (0.04s)
[08:42:53][Step 2/2] === RUN TestBaseQueuePurgatory
[08:42:53][Step 2/2] I181018 08:36:50.674924 1463 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:36:50.675768 1463 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:36:50.705520 2219 storage/queue.go:876 [s1,test] purgatory is now empty
[08:42:53][Step 2/2] --- PASS: TestBaseQueuePurgatory (0.07s)
[08:42:53][Step 2/2] === RUN TestBaseQueueProcessTimeout
[08:42:53][Step 2/2] E181018 08:36:50.748881 2118 storage/queue.go:791 [s1,test,r1/1:/M{in-ax}] context deadline exceeded
[08:42:53][Step 2/2] --- PASS: TestBaseQueueProcessTimeout (0.10s)
[08:42:53][Step 2/2] === RUN TestBaseQueueTimeMetric
[08:42:53][Step 2/2] --- PASS: TestBaseQueueTimeMetric (0.04s)
[08:42:53][Step 2/2] === RUN TestBaseQueueShouldQueueAgain
[08:42:53][Step 2/2] --- PASS: TestBaseQueueShouldQueueAgain (0.01s)
[08:42:53][Step 2/2] === RUN TestBaseQueueDisable
[08:42:53][Step 2/2] --- PASS: TestBaseQueueDisable (0.06s)
[08:42:53][Step 2/2] === RUN TestBaseQueueProcessConcurrently
[08:42:53][Step 2/2] I181018 08:36:50.954966 2324 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:36:50.955659 2324 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:36:50.961558 2324 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 2 [async] storage.test: processing replica
[08:42:53][Step 2/2] --- PASS: TestBaseQueueProcessConcurrently (0.05s)
[08:42:53][Step 2/2] === RUN TestBaseQueueRequeue
[08:42:53][Step 2/2] I181018 08:36:51.005932 2522 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:36:51.006692 2522 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] --- PASS: TestBaseQueueRequeue (0.05s)
[08:42:53][Step 2/2] === RUN TestQuotaPoolBasic
[08:42:53][Step 2/2] --- PASS: TestQuotaPoolBasic (0.08s)
[08:42:53][Step 2/2] === RUN TestQuotaPoolContextCancellation
[08:42:53][Step 2/2] --- PASS: TestQuotaPoolContextCancellation (0.01s)
[08:42:53][Step 2/2] === RUN TestQuotaPoolClose
[08:42:53][Step 2/2] --- PASS: TestQuotaPoolClose (0.01s)
[08:42:53][Step 2/2] === RUN TestQuotaPoolCanceledAcquisitions
[08:42:53][Step 2/2] --- PASS: TestQuotaPoolCanceledAcquisitions (0.01s)
[08:42:53][Step 2/2] === RUN TestQuotaPoolNoops
[08:42:53][Step 2/2] --- PASS: TestQuotaPoolNoops (0.01s)
[08:42:53][Step 2/2] === RUN TestQuotaPoolMaxQuota
[08:42:53][Step 2/2] --- PASS: TestQuotaPoolMaxQuota (0.03s)
[08:42:53][Step 2/2] === RUN TestQuotaPoolCappedAcquisition
[08:42:53][Step 2/2] --- PASS: TestQuotaPoolCappedAcquisition (0.01s)
[08:42:53][Step 2/2] === RUN TestShouldTruncate
[08:42:53][Step 2/2] === RUN TestShouldTruncate/#00
[08:42:53][Step 2/2] === RUN TestShouldTruncate/#01
[08:42:53][Step 2/2] === RUN TestShouldTruncate/#02
[08:42:53][Step 2/2] === RUN TestShouldTruncate/#03
[08:42:53][Step 2/2] === RUN TestShouldTruncate/#04
[08:42:53][Step 2/2] --- PASS: TestShouldTruncate (0.01s)
[08:42:53][Step 2/2] --- PASS: TestShouldTruncate/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldTruncate/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldTruncate/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldTruncate/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldTruncate/#04 (0.00s)
[08:42:53][Step 2/2] === RUN TestGetQuorumIndex
[08:42:53][Step 2/2] --- PASS: TestGetQuorumIndex (0.00s)
[08:42:53][Step 2/2] === RUN TestComputeTruncatableIndex
[08:42:53][Step 2/2] --- PASS: TestComputeTruncatableIndex (0.01s)
[08:42:53][Step 2/2] === RUN TestGetTruncatableIndexes
[08:42:53][Step 2/2] --- PASS: TestGetTruncatableIndexes (0.30s)
[08:42:53][Step 2/2] === RUN TestProactiveRaftLogTruncate
[08:42:53][Step 2/2] === RUN TestProactiveRaftLogTruncate/#00
[08:42:53][Step 2/2] === RUN TestProactiveRaftLogTruncate/#01
[08:42:53][Step 2/2] I181018 08:36:52.877723 3449 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async] storage.raftlog: processing replica
[08:42:53][Step 2/2] --- PASS: TestProactiveRaftLogTruncate (1.39s)
[08:42:53][Step 2/2] --- PASS: TestProactiveRaftLogTruncate/#00 (1.31s)
[08:42:53][Step 2/2] --- PASS: TestProactiveRaftLogTruncate/#01 (0.07s)
[08:42:53][Step 2/2] === RUN TestRaftTransportStartNewQueue
[08:42:53][Step 2/2] I181018 08:36:52.891250 3565 storage/raft_transport_unit_test.go:90 running test with a ctx cancellation of 2.472596ms
[08:42:53][Step 2/2] I181018 08:36:52.894371 3565 rpc/nodedialer/nodedialer.go:91 unable to connect to n1: context deadline exceeded
[08:42:53][Step 2/2] --- PASS: TestRaftTransportStartNewQueue (0.01s)
[08:42:53][Step 2/2] === RUN TestReplicaGCShouldQueue
[08:42:53][Step 2/2] --- PASS: TestReplicaGCShouldQueue (0.01s)
[08:42:53][Step 2/2] === RUN TestReplicaRankings
[08:42:53][Step 2/2] --- PASS: TestReplicaRankings (0.00s)
[08:42:53][Step 2/2] === RUN TestSideloadingSideloadedStorage
[08:42:53][Step 2/2] === RUN TestSideloadingSideloadedStorage/Mem
[08:42:53][Step 2/2] I181018 08:36:52.922417 3666 storage/engine/rocksdb.go:575 opening rocksdb instance at "/tmp/TestSideloadingSideloadedStorage_Mem371023335"
[08:42:53][Step 2/2] I181018 08:36:52.938808 3666 storage/engine/rocksdb.go:708 closing rocksdb instance at "/tmp/TestSideloadingSideloadedStorage_Mem371023335"
[08:42:53][Step 2/2] === RUN TestSideloadingSideloadedStorage/Disk
[08:42:53][Step 2/2] I181018 08:36:52.942456 3668 storage/engine/rocksdb.go:575 opening rocksdb instance at "/tmp/TestSideloadingSideloadedStorage_Disk120179889"
[08:42:53][Step 2/2] I181018 08:36:52.995894 3668 storage/engine/rocksdb.go:708 closing rocksdb instance at "/tmp/TestSideloadingSideloadedStorage_Disk120179889"
[08:42:53][Step 2/2] --- PASS: TestSideloadingSideloadedStorage (0.08s)
[08:42:53][Step 2/2] --- PASS: TestSideloadingSideloadedStorage/Mem (0.02s)
[08:42:53][Step 2/2] --- PASS: TestSideloadingSideloadedStorage/Disk (0.06s)
[08:42:53][Step 2/2] === RUN TestRaftSSTableSideloadingInline
[08:42:53][Step 2/2] --- PASS: TestRaftSSTableSideloadingInline (0.02s)
[08:42:53][Step 2/2] === RUN TestRaftSSTableSideloadingInflight
[08:42:53][Step 2/2] --- PASS: TestRaftSSTableSideloadingInflight (0.01s)
[08:42:53][Step 2/2] === RUN TestRaftSSTableSideloadingSideload
[08:42:53][Step 2/2] === RUN TestRaftSSTableSideloadingSideload/empty
[08:42:53][Step 2/2] === RUN TestRaftSSTableSideloadingSideload/v1
[08:42:53][Step 2/2] === RUN TestRaftSSTableSideloadingSideload/v2
[08:42:53][Step 2/2] W181018 08:36:53.039612 3053 storage/replica_sideload.go:137 encountered sideloaded Raft command without inlined payload
[08:42:53][Step 2/2] === RUN TestRaftSSTableSideloadingSideload/mixed
[08:42:53][Step 2/2] W181018 08:36:53.041394 3659 storage/replica_sideload.go:137 encountered sideloaded Raft command without inlined payload
[08:42:53][Step 2/2] --- PASS: TestRaftSSTableSideloadingSideload (0.02s)
[08:42:53][Step 2/2] --- PASS: TestRaftSSTableSideloadingSideload/empty (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRaftSSTableSideloadingSideload/v1 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRaftSSTableSideloadingSideload/v2 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestRaftSSTableSideloadingSideload/mixed (0.00s)
[08:42:53][Step 2/2] === RUN TestRaftSSTableSideloadingProposal
[08:42:53][Step 2/2] --- PASS: TestRaftSSTableSideloadingProposal (0.04s)
[08:42:53][Step 2/2] === RUN TestRaftSSTableSideloadingSnapshot
[08:42:53][Step 2/2] I181018 08:36:53.091249 1152 storage/engine/rocksdb.go:575 opening rocksdb instance at "/tmp/TestRaftSSTableSideloadingSnapshot847639563"
[08:42:53][Step 2/2] I181018 08:36:53.160534 1152 storage/store_snapshot.go:621 sending testing-will-succeed snapshot e5c2406a at applied index 14
[08:42:53][Step 2/2] I181018 08:36:53.161656 1152 storage/store_snapshot.go:664 streamed snapshot to (n0,s0):?: kv pairs: 10, log entries: 4, rate-limit: 8.0 MiB/sec, 1ms
[08:42:53][Step 2/2] I181018 08:36:53.165470 1152 storage/store_snapshot.go:621 sending testing-will-fail snapshot 34906584 at applied index 14
[08:42:53][Step 2/2] I181018 08:36:53.174444 1152 storage/engine/rocksdb.go:708 closing rocksdb instance at "/tmp/TestRaftSSTableSideloadingSnapshot847639563"
[08:42:53][Step 2/2] --- PASS: TestRaftSSTableSideloadingSnapshot (0.09s)
[08:42:53][Step 2/2] === RUN TestRaftSSTableSideloadingTruncation
[08:42:53][Step 2/2] --- PASS: TestRaftSSTableSideloadingTruncation (0.10s)
[08:42:53][Step 2/2] === RUN TestRaftSSTableSideloadingUpdatedReplicaID
[08:42:53][Step 2/2] I181018 08:36:53.286628 3761 storage/engine/rocksdb.go:575 opening rocksdb instance at "/tmp/TestRaftSSTableSideloadingUpdatedReplicaID618281077"
[08:42:53][Step 2/2] I181018 08:36:53.343224 3761 storage/replica_sideload_test.go:1043 olddir is /tmp/TestRaftSSTableSideloadingUpdatedReplicaID618281077/auxiliary/sideloading/1/1.1, newdir is /tmp/TestRaftSSTableSideloadingUpdatedReplicaID618281077/auxiliary/sideloading/1/1.2
[08:42:53][Step 2/2] I181018 08:36:53.346065 3761 storage/engine/rocksdb.go:708 closing rocksdb instance at "/tmp/TestRaftSSTableSideloadingUpdatedReplicaID618281077"
[08:42:53][Step 2/2] --- PASS: TestRaftSSTableSideloadingUpdatedReplicaID (0.06s)
[08:42:53][Step 2/2] === RUN TestReplicaStats
[08:42:53][Step 2/2] --- PASS: TestReplicaStats (0.01s)
[08:42:53][Step 2/2] === RUN TestReplicaStatsDecay
[08:42:53][Step 2/2] --- PASS: TestReplicaStatsDecay (0.00s)
[08:42:53][Step 2/2] === RUN TestReplicaStatsDecaySmoothing
[08:42:53][Step 2/2] --- PASS: TestReplicaStatsDecaySmoothing (0.01s)
[08:42:53][Step 2/2] === RUN TestIsOnePhaseCommit
[08:42:53][Step 2/2] --- PASS: TestIsOnePhaseCommit (0.01s)
[08:42:53][Step 2/2] === RUN TestReplicaContains
[08:42:53][Step 2/2] --- PASS: TestReplicaContains (0.00s)
[08:42:53][Step 2/2] === RUN TestReplicaReadConsistency
[08:42:53][Step 2/2] --- PASS: TestReplicaReadConsistency (0.05s)
[08:42:53][Step 2/2] === RUN TestBehaviorDuringLeaseTransfer
[08:42:53][Step 2/2] I181018 08:36:53.467126 4129 storage/replica_command.go:75 [s1,r1/1:/M{in-ax}] test injecting error: storage/replica_test.go:611: injected transfer error
[08:42:53][Step 2/2] --- PASS: TestBehaviorDuringLeaseTransfer (0.05s)
[08:42:53][Step 2/2] === RUN TestApplyCmdLeaseError
[08:42:53][Step 2/2] --- PASS: TestApplyCmdLeaseError (0.05s)
[08:42:53][Step 2/2] === RUN TestLeaseReplicaNotInDesc
[08:42:53][Step 2/2] --- PASS: TestLeaseReplicaNotInDesc (0.04s)
[08:42:53][Step 2/2] === RUN TestReplicaRangeBoundsChecking
[08:42:53][Step 2/2] I181018 08:36:53.602805 4420 util/protoutil/randnullability.go:94 inserting null for (roachpb.Lease).Expiration: false
[08:42:53][Step 2/2] I181018 08:36:53.602899 4420 util/protoutil/randnullability.go:94 inserting null for (roachpb.Lease).DeprecatedStartStasis: false
[08:42:53][Step 2/2] --- PASS: TestReplicaRangeBoundsChecking (0.05s)
[08:42:53][Step 2/2] === RUN TestReplicaLease
[08:42:53][Step 2/2] --- PASS: TestReplicaLease (0.05s)
[08:42:53][Step 2/2] === RUN TestReplicaNotLeaseHolderError
[08:42:53][Step 2/2] --- PASS: TestReplicaNotLeaseHolderError (0.10s)
[08:42:53][Step 2/2] === RUN TestReplicaLeaseCounters
[08:42:53][Step 2/2] --- PASS: TestReplicaLeaseCounters (0.05s)
[08:42:53][Step 2/2] === RUN TestReplicaGossipConfigsOnLease
[08:42:53][Step 2/2] --- PASS: TestReplicaGossipConfigsOnLease (0.05s)
[08:42:53][Step 2/2] === RUN TestReplicaTSCacheLowWaterOnLease
[08:42:53][Step 2/2] --- PASS: TestReplicaTSCacheLowWaterOnLease (0.06s)
[08:42:53][Step 2/2] === RUN TestReplicaLeaseRejectUnknownRaftNodeID
[08:42:53][Step 2/2] --- PASS: TestReplicaLeaseRejectUnknownRaftNodeID (0.09s)
[08:42:53][Step 2/2] === RUN TestReplicaDrainLease
[08:42:53][Step 2/2] --- PASS: TestReplicaDrainLease (0.04s)
[08:42:53][Step 2/2] === RUN TestReplicaGossipFirstRange
[08:42:53][Step 2/2] --- PASS: TestReplicaGossipFirstRange (0.04s)
[08:42:53][Step 2/2] === RUN TestReplicaGossipAllConfigs
[08:42:53][Step 2/2] --- PASS: TestReplicaGossipAllConfigs (0.04s)
[08:42:53][Step 2/2] === RUN TestReplicaNoGossipConfig
[08:42:53][Step 2/2] --- PASS: TestReplicaNoGossipConfig (0.10s)
[08:42:53][Step 2/2] === RUN TestReplicaNoGossipFromNonLeader
[08:42:53][Step 2/2] I181018 08:36:54.265718 5353 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async] storage.pendingLeaseRequest: requesting lease
[08:42:53][Step 2/2] --- PASS: TestReplicaNoGossipFromNonLeader (0.06s)
[08:42:53][Step 2/2] === RUN TestOptimizePuts
[08:42:53][Step 2/2] --- PASS: TestOptimizePuts (0.05s)
[08:42:53][Step 2/2] === RUN TestAcquireLease
[08:42:53][Step 2/2] === RUN TestAcquireLease/#00
[08:42:53][Step 2/2] === RUN TestAcquireLease/#00/withMinLeaseProposedTS=false
[08:42:53][Step 2/2] === RUN TestAcquireLease/#00/withMinLeaseProposedTS=true
[08:42:53][Step 2/2] === RUN TestAcquireLease/#01
[08:42:53][Step 2/2] === RUN TestAcquireLease/#01/withMinLeaseProposedTS=false
[08:42:53][Step 2/2] === RUN TestAcquireLease/#01/withMinLeaseProposedTS=true
[08:42:53][Step 2/2] --- PASS: TestAcquireLease (0.18s)
[08:42:53][Step 2/2] --- PASS: TestAcquireLease/#00 (0.08s)
[08:42:53][Step 2/2] --- PASS: TestAcquireLease/#00/withMinLeaseProposedTS=false (0.04s)
[08:42:53][Step 2/2] --- PASS: TestAcquireLease/#00/withMinLeaseProposedTS=true (0.04s)
[08:42:53][Step 2/2] --- PASS: TestAcquireLease/#01 (0.09s)
[08:42:53][Step 2/2] --- PASS: TestAcquireLease/#01/withMinLeaseProposedTS=false (0.05s)
[08:42:53][Step 2/2] --- PASS: TestAcquireLease/#01/withMinLeaseProposedTS=true (0.04s)
[08:42:53][Step 2/2] === RUN TestLeaseConcurrent
[08:42:53][Step 2/2] === RUN TestLeaseConcurrent/withError=false
[08:42:53][Step 2/2] === RUN TestLeaseConcurrent/withError=true
[08:42:53][Step 2/2] --- PASS: TestLeaseConcurrent (0.08s)
[08:42:53][Step 2/2] --- PASS: TestLeaseConcurrent/withError=false (0.04s)
[08:42:53][Step 2/2] --- PASS: TestLeaseConcurrent/withError=true (0.03s)
[08:42:53][Step 2/2] === RUN TestReplicaUpdateTSCache
[08:42:53][Step 2/2] --- PASS: TestReplicaUpdateTSCache (0.05s)
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue/read-read
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue/read-read-local
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue/read-read-addRead
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue/read-read-addRead-local
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue/read-write
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue/read-write-local
[08:42:53][Step 2/2] I181018 08:36:54.853024 6388 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async]
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue/read-write-addRead
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue/read-write-addRead-local
[08:42:53][Step 2/2] I181018 08:36:54.917384 6496 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async]
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue/write-read
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue/write-read-local
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue/write-read-addWrite
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue/write-read-addWrite-local
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue/write-write
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue/write-write-local
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue/write-write-addWrite
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueue/write-write-addWrite-local
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue (0.59s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue/read-read (0.03s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue/read-read-local (0.03s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue/read-read-addRead (0.03s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue/read-read-addRead-local (0.03s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue/read-write (0.03s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue/read-write-local (0.04s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue/read-write-addRead (0.03s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue/read-write-addRead-local (0.03s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue/write-read (0.03s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue/write-read-local (0.04s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue/write-read-addWrite (0.03s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue/write-read-addWrite-local (0.05s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue/write-write (0.04s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue/write-write-local (0.04s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue/write-write-addWrite (0.04s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueue/write-write-addWrite-local (0.04s)
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueInconsistent
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueInconsistent/READ_UNCOMMITTED
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueInconsistent/INCONSISTENT
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueInconsistent (0.08s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueInconsistent/READ_UNCOMMITTED (0.03s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueInconsistent/INCONSISTENT (0.03s)
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SingleDependency
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SingleDependency/RR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SingleDependency/RR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SingleDependency/RC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SingleDependency/RC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SingleDependency/CR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SingleDependency/CR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SingleDependency/CC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SingleDependency/CC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SingleDependency/CC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/RRR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/RRR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/RRC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/RRC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/RCR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/RCR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/RCC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/RCC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/RCC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/CRR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/CRR/Forward
[08:42:53][Step 2/2] I181018 08:36:55.974278 8417 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async] test
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/CRC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/CRC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/CRC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/CCR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/CCR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/CCR/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/CCC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/CCC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/MultipleDependencies/CCC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/RRR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/RRR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/RRC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/RRC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/RCR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/RCR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/RCC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/RCC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/RCC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/CRR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/CRR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/CRC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/CRC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/CRC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/CCR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/CCR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/CCR/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/CCC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/CCC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/DependencyChain/CCC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RRRR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RRRR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RRRC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RRRC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RRCR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RRCR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RRCC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RRCC/Forward
[08:42:53][Step 2/2] I181018 08:36:57.324113 10830 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async] test
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RRCC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RCRR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RCRR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RCRC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RCRC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RCRC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RCCR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RCCR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RCCR/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RCCC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RCCC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/RCCC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CRRR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CRRR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CRRC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CRRC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CRRC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CRCR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CRCR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CRCR/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CRCC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CRCC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CRCC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CCRR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CCRR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CCRR/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CCRC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CCRC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CCRC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CCCR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CCCR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CCCR/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CCCC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CCCC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/SplitDependencyChain/CCCC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRRR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRRR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRRC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRRC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRCR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRCR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRCC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRCC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRCC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCRR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCRR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCRC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCRC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCRC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCCR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCCR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCCR/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCCC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCCC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCCC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRRR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRRR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRRC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRRC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRRC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRCR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRCR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRCR/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRCC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRCC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRCC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCRR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCRR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCRR/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCRC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCRC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCRC/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCCR
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCCR/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCCR/Reverse
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCCC
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCCC/Forward
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCCC/Reverse
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation (5.34s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SingleDependency (0.28s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SingleDependency/RR (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SingleDependency/RR/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SingleDependency/RC (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SingleDependency/RC/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SingleDependency/CR (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SingleDependency/CR/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SingleDependency/CC (0.09s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SingleDependency/CC/Forward (0.05s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SingleDependency/CC/Reverse (0.04s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies (0.72s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/RRR (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/RRR/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/RRC (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/RRC/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/RCR (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/RCR/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/RCC (0.11s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/RCC/Forward (0.05s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/RCC/Reverse (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/CRR (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/CRR/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/CRC (0.12s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/CRC/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/CRC/Reverse (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/CCR (0.13s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/CCR/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/CCR/Reverse (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/CCC (0.09s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/CCC/Forward (0.05s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/MultipleDependencies/CCC/Reverse (0.05s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain (0.74s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/RRR (0.08s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/RRR/Forward (0.08s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/RRC (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/RRC/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/RCR (0.08s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/RCR/Forward (0.08s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/RCC (0.11s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/RCC/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/RCC/Reverse (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/CRR (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/CRR/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/CRC (0.11s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/CRC/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/CRC/Reverse (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/CCR (0.11s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/CCR/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/CCR/Reverse (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/CCC (0.10s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/CCC/Forward (0.05s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/DependencyChain/CCC/Reverse (0.05s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain (1.77s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RRRR (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RRRR/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RRRC (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RRRC/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RRCR (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RRCR/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RRCC (0.12s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RRCC/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RRCC/Reverse (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RCRR (0.08s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RCRR/Forward (0.08s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RCRC (0.14s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RCRC/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RCRC/Reverse (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RCCR (0.14s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RCCR/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RCCR/Reverse (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RCCC (0.11s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RCCC/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/RCCC/Reverse (0.05s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CRRR (0.09s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CRRR/Forward (0.09s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CRRC (0.14s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CRRC/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CRRC/Reverse (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CRCR (0.13s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CRCR/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CRCR/Reverse (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CRCC (0.13s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CRCC/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CRCC/Reverse (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CCRR (0.14s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CCRR/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CCRR/Reverse (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CCRC (0.12s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CCRC/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CCRC/Reverse (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CCCR (0.12s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CCCR/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CCCR/Reverse (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CCCC (0.10s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CCCC/Forward (0.05s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/SplitDependencyChain/CCCC/Reverse (0.05s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain (1.83s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRRR (0.09s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRRR/Forward (0.09s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRRC (0.08s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRRC/Forward (0.08s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRCR (0.08s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRCR/Forward (0.08s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRCC (0.13s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRCC/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RRCC/Reverse (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCRR (0.08s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCRR/Forward (0.08s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCRC (0.14s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCRC/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCRC/Reverse (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCCR (0.14s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCCR/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCCR/Reverse (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCCC (0.11s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCCC/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/RCCC/Reverse (0.05s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRRR (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRRR/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRRC (0.14s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRRC/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRRC/Reverse (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRCR (0.13s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRCR/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRCR/Reverse (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRCC (0.12s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRCC/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CRCC/Reverse (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCRR (0.14s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCRR/Forward (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCRR/Reverse (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCRC (0.12s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCRC/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCRC/Reverse (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCCR (0.12s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCCR/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCCR/Reverse (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCCC (0.11s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCCC/Forward (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellation/NonOverlappingDependencyChain/CCCC/Reverse (0.05s)
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationCascade
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationCascade (3.02s)
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationRandom
[08:42:53][Step 2/2] I181018 08:37:03.919326 16832 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async] test
[08:42:53][Step 2/2] I181018 08:37:05.507937 16832 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async] test
[08:42:53][Step 2/2] I181018 08:37:06.014477 16832 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 2 [async] test
[08:42:53][Step 2/2] I181018 08:37:06.014801 16832 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async] test
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationRandom (2.85s)
[08:42:53][Step 2/2] replica_test.go:2632: running with seed 6209671620797666722
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/16266
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/CancelEndTxn
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/CancelEndTxn/[1_2]
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/CancelEndTxn/[2_1]
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/CancelResolveIntent
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/CancelResolveIntent/[3_4]
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/CancelResolveIntent/[4_3]
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/CancelResolveIntent/[6_3]
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/CancelResolveIntent/[6_5_4_3_2]
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/CancelResolveIntent/[2_3_4_5_6]
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/CancelResolveIntent/[2_6_3_5_4]
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/CancelSplit
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/CancelSplit/[3]
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/CancelSplit/[3_6]
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/CancelSplit/[6_3]
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/CancelSplit/[1_3_5_6]
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueCancellationLocal/CancelSplit/[5_1_6_3]
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal (1.50s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/16266 (0.08s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/CancelEndTxn (0.17s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/CancelEndTxn/[1_2] (0.09s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/CancelEndTxn/[2_1] (0.09s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/CancelResolveIntent (0.69s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/CancelResolveIntent/[3_4] (0.13s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/CancelResolveIntent/[4_3] (0.13s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/CancelResolveIntent/[6_3] (0.13s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/CancelResolveIntent/[6_5_4_3_2] (0.10s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/CancelResolveIntent/[2_3_4_5_6] (0.11s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/CancelResolveIntent/[2_6_3_5_4] (0.09s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/CancelSplit (0.55s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/CancelSplit/[3] (0.13s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/CancelSplit/[3_6] (0.12s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/CancelSplit/[6_3] (0.12s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/CancelSplit/[1_3_5_6] (0.09s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueCancellationLocal/CancelSplit/[5_1_6_3] (0.09s)
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueSelfOverlap
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueSelfOverlap/cmd1Read=false
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueSelfOverlap/cmd1Read=false/cmd2Read=false
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueSelfOverlap/cmd1Read=false/cmd2Read=true
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueSelfOverlap/cmd1Read=true
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueSelfOverlap/cmd1Read=true/cmd2Read=false
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueSelfOverlap/cmd1Read=true/cmd2Read=true
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueSelfOverlap (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueSelfOverlap/cmd1Read=false (0.01s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueSelfOverlap/cmd1Read=false/cmd2Read=false (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueSelfOverlap/cmd1Read=false/cmd2Read=true (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueSelfOverlap/cmd1Read=true (0.01s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueSelfOverlap/cmd1Read=true/cmd2Read=false (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueSelfOverlap/cmd1Read=true/cmd2Read=true (0.00s)
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueTimestampNonInterference
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueTimestampNonInterference/{readerTS:{WallTime:1_Logical:0}_writerTS:{WallTime:1_Logical:0}_key:[97]_readerFirst:true_interferes:true}
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueTimestampNonInterference/{readerTS:{WallTime:1_Logical:0}_writerTS:{WallTime:1_Logical:0}_key:[98]_readerFirst:false_interferes:true}
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueTimestampNonInterference/{readerTS:{WallTime:1_Logical:0}_writerTS:{WallTime:1_Logical:1}_key:[99]_readerFirst:true_interferes:false}
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueTimestampNonInterference/{readerTS:{WallTime:1_Logical:0}_writerTS:{WallTime:1_Logical:1}_key:[100]_readerFirst:false_interferes:false}
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueTimestampNonInterference/{readerTS:{WallTime:1_Logical:1}_writerTS:{WallTime:1_Logical:0}_key:[101]_readerFirst:true_interferes:true}
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueTimestampNonInterference/{readerTS:{WallTime:1_Logical:1}_writerTS:{WallTime:1_Logical:0}_key:[102]_readerFirst:false_interferes:true}
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueTimestampNonInterference/{readerTS:{WallTime:1_Logical:0}_writerTS:{WallTime:1_Logical:1}_key:[1_107_18_97_0_1_114_100_115_99]_readerFirst:true_interferes:true}
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueTimestampNonInterference/{readerTS:{WallTime:1_Logical:0}_writerTS:{WallTime:1_Logical:1}_key:[1_107_18_98_0_1_114_100_115_99]_readerFirst:false_interferes:true}
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueTimestampNonInterference (0.15s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueTimestampNonInterference/{readerTS:{WallTime:1_Logical:0}_writerTS:{WallTime:1_Logical:0}_key:[97]_readerFirst:true_interferes:true} (0.01s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueTimestampNonInterference/{readerTS:{WallTime:1_Logical:0}_writerTS:{WallTime:1_Logical:0}_key:[98]_readerFirst:false_interferes:true} (0.01s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueTimestampNonInterference/{readerTS:{WallTime:1_Logical:0}_writerTS:{WallTime:1_Logical:1}_key:[99]_readerFirst:true_interferes:false} (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueTimestampNonInterference/{readerTS:{WallTime:1_Logical:0}_writerTS:{WallTime:1_Logical:1}_key:[100]_readerFirst:false_interferes:false} (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueTimestampNonInterference/{readerTS:{WallTime:1_Logical:1}_writerTS:{WallTime:1_Logical:0}_key:[101]_readerFirst:true_interferes:true} (0.01s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueTimestampNonInterference/{readerTS:{WallTime:1_Logical:1}_writerTS:{WallTime:1_Logical:0}_key:[102]_readerFirst:false_interferes:true} (0.02s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueTimestampNonInterference/{readerTS:{WallTime:1_Logical:0}_writerTS:{WallTime:1_Logical:1}_key:[1_107_18_97_0_1_114_100_115_99]_readerFirst:true_interferes:true} (0.01s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueTimestampNonInterference/{readerTS:{WallTime:1_Logical:0}_writerTS:{WallTime:1_Logical:1}_key:[1_107_18_98_0_1_114_100_115_99]_readerFirst:false_interferes:true} (0.01s)
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueueSplitDeclaresWrites
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueueSplitDeclaresWrites (0.01s)
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueuePrereqDebugSummary
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueuePrereqDebugSummary/no_prereqs
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueuePrereqDebugSummary/{write/global:_[1_Put,_1_BeginTxn,_1_EndTxn]}
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueuePrereqDebugSummary/{read/global:_[1_BeginTxn]}_{read/local:_[1_BeginTxn]}_{write/global:_[3_Put]}_{write/local:_[1_Put,_1_BeginTxn,_1_EndTxn]}
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueuePrereqDebugSummary/{read/local:_[1_BeginTxn]}_{write/local:_[1_BeginTxn],_[3_Put],_[1_Put,_1_BeginTxn,_1_EndTxn],_[1_BeginTxn]}
[08:42:53][Step 2/2] === RUN TestReplicaCommandQueuePrereqDebugSummary/{read/local:_[1_BeginTxn]}_{write/local:_[1_BeginTxn],_[3_Put],_[1_Put,_1_BeginTxn,_1_EndTxn],_[1_BeginTxn],_[3_Put],_...}
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueuePrereqDebugSummary (0.01s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueuePrereqDebugSummary/no_prereqs (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueuePrereqDebugSummary/{write/global:_[1_Put,_1_BeginTxn,_1_EndTxn]} (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueuePrereqDebugSummary/{read/global:_[1_BeginTxn]}_{read/local:_[1_BeginTxn]}_{write/global:_[3_Put]}_{write/local:_[1_Put,_1_BeginTxn,_1_EndTxn]} (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueuePrereqDebugSummary/{read/local:_[1_BeginTxn]}_{write/local:_[1_BeginTxn],_[3_Put],_[1_Put,_1_BeginTxn,_1_EndTxn],_[1_BeginTxn]} (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaCommandQueuePrereqDebugSummary/{read/local:_[1_BeginTxn]}_{write/local:_[1_BeginTxn],_[3_Put],_[1_Put,_1_BeginTxn,_1_EndTxn],_[1_BeginTxn],_[3_Put],_...} (0.00s)
[08:42:53][Step 2/2] === RUN TestReplicaUseTSCache
[08:42:53][Step 2/2] --- PASS: TestReplicaUseTSCache (0.05s)
[08:42:53][Step 2/2] === RUN TestConditionalPutUpdatesTSCacheOnError
[08:42:53][Step 2/2] --- PASS: TestConditionalPutUpdatesTSCacheOnError (0.07s)
[08:42:53][Step 2/2] === RUN TestReplicaNoTSCacheInconsistent
[08:42:53][Step 2/2] === RUN TestReplicaNoTSCacheInconsistent/READ_UNCOMMITTED
[08:42:53][Step 2/2] === RUN TestReplicaNoTSCacheInconsistent/INCONSISTENT
[08:42:53][Step 2/2] --- PASS: TestReplicaNoTSCacheInconsistent (0.13s)
[08:42:53][Step 2/2] --- PASS: TestReplicaNoTSCacheInconsistent/READ_UNCOMMITTED (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaNoTSCacheInconsistent/INCONSISTENT (0.06s)
[08:42:53][Step 2/2] === RUN TestReplicaNoTSCacheUpdateOnFailure
[08:42:53][Step 2/2] --- PASS: TestReplicaNoTSCacheUpdateOnFailure (0.07s)
[08:42:53][Step 2/2] === RUN TestReplicaNoTimestampIncrementWithinTxn
[08:42:53][Step 2/2] --- PASS: TestReplicaNoTimestampIncrementWithinTxn (0.06s)
[08:42:53][Step 2/2] === RUN TestReplicaAbortSpanReadError
[08:42:53][Step 2/2] E181018 08:37:08.691514 21273 storage/replica.go:6712 [s1,r1/1:/M{in-ax}] stalling replica due to: could not read from AbortSpan: proto: illegal wireType 6
[08:42:53][Step 2/2] --- PASS: TestReplicaAbortSpanReadError (0.11s)
[08:42:53][Step 2/2] === RUN TestReplicaAbortSpanStoredTxnRetryError
[08:42:53][Step 2/2] --- PASS: TestReplicaAbortSpanStoredTxnRetryError (0.11s)
[08:42:53][Step 2/2] === RUN TestReplicaAbortSpanOnlyWithIntent
[08:42:53][Step 2/2] --- PASS: TestReplicaAbortSpanOnlyWithIntent (0.05s)
[08:42:53][Step 2/2] === RUN TestEndTransactionDeadline
[08:42:53][Step 2/2] --- PASS: TestEndTransactionDeadline (0.13s)
[08:42:53][Step 2/2] === RUN TestSerializableDeadline
[08:42:53][Step 2/2] === RUN TestSerializableDeadline/SNAPSHOT
[08:42:53][Step 2/2] === RUN TestSerializableDeadline/SERIALIZABLE
[08:42:53][Step 2/2] --- PASS: TestSerializableDeadline (0.07s)
[08:42:53][Step 2/2] --- PASS: TestSerializableDeadline/SNAPSHOT (0.01s)
[08:42:53][Step 2/2] --- PASS: TestSerializableDeadline/SERIALIZABLE (0.01s)
[08:42:53][Step 2/2] === RUN TestEndTransactionTxnSpanGCThreshold
[08:42:53][Step 2/2] --- PASS: TestEndTransactionTxnSpanGCThreshold (0.07s)
[08:42:53][Step 2/2] === RUN TestEndTransactionDeadline_1PC
[08:42:53][Step 2/2] --- PASS: TestEndTransactionDeadline_1PC (0.06s)
[08:42:53][Step 2/2] === RUN Test1PCTransactionWriteTimestamp
[08:42:53][Step 2/2] === RUN Test1PCTransactionWriteTimestamp/SNAPSHOT
[08:42:53][Step 2/2] === RUN Test1PCTransactionWriteTimestamp/SERIALIZABLE
[08:42:53][Step 2/2] --- PASS: Test1PCTransactionWriteTimestamp (0.05s)
[08:42:53][Step 2/2] --- PASS: Test1PCTransactionWriteTimestamp/SNAPSHOT (0.01s)
[08:42:53][Step 2/2] --- PASS: Test1PCTransactionWriteTimestamp/SERIALIZABLE (0.00s)
[08:42:53][Step 2/2] === RUN TestEndTransactionWithMalformedSplitTrigger
[08:42:53][Step 2/2] E181018 08:37:09.352405 22080 storage/replica.go:6712 [s1,r1/1:/M{in-ax}] stalling replica due to: range does not match splits: ("bar"-"foo") + ("foo"-/Max) != [n1,s1,r1/1:/M{in-ax}]
[08:42:53][Step 2/2] --- PASS: TestEndTransactionWithMalformedSplitTrigger (0.07s)
[08:42:53][Step 2/2] === RUN TestEndTransactionBeforeHeartbeat
[08:42:53][Step 2/2] --- PASS: TestEndTransactionBeforeHeartbeat (0.08s)
[08:42:53][Step 2/2] === RUN TestEndTransactionAfterHeartbeat
[08:42:53][Step 2/2] --- PASS: TestEndTransactionAfterHeartbeat (0.08s)
[08:42:53][Step 2/2] === RUN TestEndTransactionWithPushedTimestamp
[08:42:53][Step 2/2] --- PASS: TestEndTransactionWithPushedTimestamp (0.15s)
[08:42:53][Step 2/2] === RUN TestEndTransactionWithIncrementedEpoch
[08:42:53][Step 2/2] --- PASS: TestEndTransactionWithIncrementedEpoch (0.07s)
[08:42:53][Step 2/2] === RUN TestEndTransactionWithErrors
[08:42:53][Step 2/2] --- PASS: TestEndTransactionWithErrors (0.06s)
[08:42:53][Step 2/2] === RUN TestEndTransactionRollbackAbortedTransaction
[08:42:53][Step 2/2] === RUN TestEndTransactionRollbackAbortedTransaction/populateAbortSpan=false
[08:42:53][Step 2/2] === RUN TestEndTransactionRollbackAbortedTransaction/populateAbortSpan=true
[08:42:53][Step 2/2] --- PASS: TestEndTransactionRollbackAbortedTransaction (0.13s)
[08:42:53][Step 2/2] --- PASS: TestEndTransactionRollbackAbortedTransaction/populateAbortSpan=false (0.06s)
[08:42:53][Step 2/2] --- PASS: TestEndTransactionRollbackAbortedTransaction/populateAbortSpan=true (0.06s)
[08:42:53][Step 2/2] === RUN TestRaftRetryProtectionInTxn
[08:42:53][Step 2/2] --- PASS: TestRaftRetryProtectionInTxn (0.06s)
[08:42:53][Step 2/2] === RUN TestReplicaLaziness
[08:42:53][Step 2/2] --- PASS: TestReplicaLaziness (0.15s)
[08:42:53][Step 2/2] === RUN TestRaftRetryCantCommitIntents
[08:42:53][Step 2/2] === RUN TestRaftRetryCantCommitIntents/SERIALIZABLE
[08:42:53][Step 2/2] === RUN TestRaftRetryCantCommitIntents/SNAPSHOT
[08:42:53][Step 2/2] --- PASS: TestRaftRetryCantCommitIntents (0.09s)
[08:42:53][Step 2/2] --- PASS: TestRaftRetryCantCommitIntents/SERIALIZABLE (0.02s)
[08:42:53][Step 2/2] --- PASS: TestRaftRetryCantCommitIntents/SNAPSHOT (0.02s)
[08:42:53][Step 2/2] === RUN TestDuplicateBeginTransaction
[08:42:53][Step 2/2] --- PASS: TestDuplicateBeginTransaction (0.07s)
[08:42:53][Step 2/2] === RUN TestEndTransactionLocalGC
[08:42:53][Step 2/2] W181018 08:37:10.351042 22915 storage/engine/mvcc.go:2232 [s1,r1/1:{/Min-c}] unable to find value for "a" @ 1539851830.347201352,0: 1539851830.340792020,0 (txn={ID:d02276b9-057a-44b4-a80e-6f421040b622 Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:1539851830.347201352,0 Priority:53726 Sequence:2 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:37:10.351328 22915 storage/engine/mvcc.go:2192 [s1,r1/1:{/Min-c}] unable to find value for "b" ({ID:d02276b9-057a-44b4-a80e-6f421040b622 Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:1539851830.347201352,0 Priority:53726 Sequence:2 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] I181018 08:37:10.372875 23212 storage/replica_command.go:75 [s1,r1/1:{/Min-c}] test injecting error: boom
[08:42:53][Step 2/2] W181018 08:37:10.373632 23212 storage/intent_resolver.go:745 [s1] failed to cleanup transaction intents: could not GC completed transaction anchored at "a": boom
[08:42:53][Step 2/2] I181018 08:37:10.374437 23536 storage/replica_command.go:75 [s1,r1/1:{/Min-c}] test injecting error: boom
[08:42:53][Step 2/2] W181018 08:37:10.376067 22915 storage/engine/mvcc.go:2232 [s1,r1/1:{/Min-c}] unable to find value for "a" @ 1539851830.371173462,0: 1539851830.340792020,0 (txn={ID:3f172fcd-11f3-4a0f-982d-e609175f88d3 Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:1539851830.371173462,0 Priority:1017680 Sequence:2 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] I181018 08:37:10.379356 22915 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] W181018 08:37:10.380072 23565 storage/engine/mvcc.go:2192 [s1,r2/1:{c-/Max}] unable to find value for "c" ({ID:3f172fcd-11f3-4a0f-982d-e609175f88d3 Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:1539851830.371173462,0 Priority:1017680 Sequence:2 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] I181018 08:37:10.380891 23565 storage/replica_command.go:75 [s1,r1/1:{/Min-c}] test injecting error: boom
[08:42:53][Step 2/2] --- PASS: TestEndTransactionLocalGC (0.09s)
[08:42:53][Step 2/2] === RUN TestEndTransactionResolveOnlyLocalIntents
[08:42:53][Step 2/2] I181018 08:37:10.461088 23213 storage/replica_command.go:75 [s1,r2/1:{a -/Max}] test injecting error: boom
[08:42:53][Step 2/2] W181018 08:37:10.461609 23213 storage/intent_resolver.go:745 [s1] failed to cleanup transaction intents: failed to resolve intents: boom
[08:42:53][Step 2/2] --- PASS: TestEndTransactionResolveOnlyLocalIntents (0.08s)
[08:42:53][Step 2/2] === RUN TestEndTransactionDirectGC
[08:42:53][Step 2/2] I181018 08:37:10.544682 23828 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] I181018 08:37:10.619431 23828 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] I181018 08:37:10.700548 23828 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] --- PASS: TestEndTransactionDirectGC (0.25s)
[08:42:53][Step 2/2] === RUN TestEndTransactionDirectGCFailure
[08:42:53][Step 2/2] I181018 08:37:10.799588 24116 storage/replica_command.go:75 [s1,r2/1:{a -/Max}] test injecting error: boom
[08:42:53][Step 2/2] W181018 08:37:10.800054 24116 storage/intent_resolver.go:745 [s1] failed to cleanup transaction intents: failed to resolve intents: boom
[08:42:53][Step 2/2] --- PASS: TestEndTransactionDirectGCFailure (0.11s)
[08:42:53][Step 2/2] === RUN TestEndTransactionDirectGC_1PC
[08:42:53][Step 2/2] --- PASS: TestEndTransactionDirectGC_1PC (0.11s)
[08:42:53][Step 2/2] === RUN TestReplicaTransactionRequires1PC
[08:42:53][Step 2/2] === RUN TestReplicaTransactionRequires1PC/#00
[08:42:53][Step 2/2] === RUN TestReplicaTransactionRequires1PC/#01
[08:42:53][Step 2/2] I181018 08:37:10.986181 24217 storage/replica_command.go:75 [s1,r1/1:/M{in-ax}] test injecting error: storage/replica_test.go:5173: injected error
[08:42:53][Step 2/2] --- PASS: TestReplicaTransactionRequires1PC (0.05s)
[08:42:53][Step 2/2] --- PASS: TestReplicaTransactionRequires1PC/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaTransactionRequires1PC/#01 (0.00s)
[08:42:53][Step 2/2] === RUN TestReplicaEndTransactionWithRequire1PC
[08:42:53][Step 2/2] --- PASS: TestReplicaEndTransactionWithRequire1PC (0.11s)
[08:42:53][Step 2/2] === RUN TestReplicaResolveIntentNoWait
[08:42:53][Step 2/2] W181018 08:37:11.183627 24556 storage/engine/mvcc.go:2192 [s1,r2/1:{aa-/Max}] unable to find value for "zresolveme" ({ID:df3ff547-a33e-443c-b3b7-467be1ae5f85 Isolation:SERIALIZABLE Key:[122 114 101 115 111 108 118 101 109 101] Epoch:0 Timestamp:1539851831.182714465,0 Priority:128297 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] I181018 08:37:11.184334 24556 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] W181018 08:37:11.185454 24470 storage/intent_resolver.go:745 [s1] failed to cleanup transaction intents: failed to resolve intents: result is ambiguous (server shutdown)
[08:42:53][Step 2/2] --- PASS: TestReplicaResolveIntentNoWait (0.08s)
[08:42:53][Step 2/2] === RUN TestAbortSpanPoisonOnResolve
[08:42:53][Step 2/2] --- PASS: TestAbortSpanPoisonOnResolve (0.27s)
[08:42:53][Step 2/2] === RUN TestAbortSpanError
[08:42:53][Step 2/2] --- PASS: TestAbortSpanError (0.12s)
[08:42:53][Step 2/2] === RUN TestPushTxnBadKey
[08:42:53][Step 2/2] --- PASS: TestPushTxnBadKey (0.05s)
[08:42:53][Step 2/2] === RUN TestPushTxnAlreadyCommittedOrAborted
[08:42:53][Step 2/2] --- PASS: TestPushTxnAlreadyCommittedOrAborted (0.13s)
[08:42:53][Step 2/2] === RUN TestPushTxnUpgradeExistingTxn
[08:42:53][Step 2/2] --- PASS: TestPushTxnUpgradeExistingTxn (0.08s)
[08:42:53][Step 2/2] === RUN TestPushTxnQueryPusheeHasNewerVersion
[08:42:53][Step 2/2] --- PASS: TestPushTxnQueryPusheeHasNewerVersion (0.06s)
[08:42:53][Step 2/2] === RUN TestPushTxnHeartbeatTimeout
[08:42:53][Step 2/2] --- PASS: TestPushTxnHeartbeatTimeout (0.14s)
[08:42:53][Step 2/2] === RUN TestResolveIntentPushTxnReplyTxn
[08:42:53][Step 2/2] --- PASS: TestResolveIntentPushTxnReplyTxn (0.06s)
[08:42:53][Step 2/2] === RUN TestPushTxnPriorities
[08:42:53][Step 2/2] --- PASS: TestPushTxnPriorities (0.10s)
[08:42:53][Step 2/2] === RUN TestPushTxnPushTimestamp
[08:42:53][Step 2/2] --- PASS: TestPushTxnPushTimestamp (0.10s)
[08:42:53][Step 2/2] === RUN TestPushTxnPushTimestampAlreadyPushed
[08:42:53][Step 2/2] --- PASS: TestPushTxnPushTimestampAlreadyPushed (0.07s)
[08:42:53][Step 2/2] === RUN TestPushTxnSerializableRestart
[08:42:53][Step 2/2] --- PASS: TestPushTxnSerializableRestart (0.06s)
[08:42:53][Step 2/2] === RUN TestQueryIntentRequest
[08:42:53][Step 2/2] === RUN TestQueryIntentRequest/SNAPSHOT
[08:42:53][Step 2/2] === RUN TestQueryIntentRequest/SNAPSHOT/behavior=DO_NOTHING
[08:42:53][Step 2/2] === RUN TestQueryIntentRequest/SNAPSHOT/behavior=RETURN_ERROR
[08:42:53][Step 2/2] === RUN TestQueryIntentRequest/SERIALIZABLE
[08:42:53][Step 2/2] === RUN TestQueryIntentRequest/SERIALIZABLE/behavior=DO_NOTHING
[08:42:53][Step 2/2] === RUN TestQueryIntentRequest/SERIALIZABLE/behavior=RETURN_ERROR
[08:42:53][Step 2/2] === RUN TestQueryIntentRequest/SERIALIZABLE/behavior=PREVENT
[08:42:53][Step 2/2] --- PASS: TestQueryIntentRequest (0.33s)
[08:42:53][Step 2/2] --- PASS: TestQueryIntentRequest/SNAPSHOT (0.13s)
[08:42:53][Step 2/2] --- PASS: TestQueryIntentRequest/SNAPSHOT/behavior=DO_NOTHING (0.06s)
[08:42:53][Step 2/2] --- PASS: TestQueryIntentRequest/SNAPSHOT/behavior=RETURN_ERROR (0.07s)
[08:42:53][Step 2/2] --- PASS: TestQueryIntentRequest/SERIALIZABLE (0.19s)
[08:42:53][Step 2/2] --- PASS: TestQueryIntentRequest/SERIALIZABLE/behavior=DO_NOTHING (0.06s)
[08:42:53][Step 2/2] --- PASS: TestQueryIntentRequest/SERIALIZABLE/behavior=RETURN_ERROR (0.07s)
[08:42:53][Step 2/2] --- PASS: TestQueryIntentRequest/SERIALIZABLE/behavior=PREVENT (0.06s)
[08:42:53][Step 2/2] === RUN TestReplicaResolveIntentRange
[08:42:53][Step 2/2] --- PASS: TestReplicaResolveIntentRange (0.06s)
[08:42:53][Step 2/2] === RUN TestRangeStatsComputation
[08:42:53][Step 2/2] --- PASS: TestRangeStatsComputation (0.07s)
[08:42:53][Step 2/2] === RUN TestMerge
[08:42:53][Step 2/2] --- PASS: TestMerge (0.13s)
[08:42:53][Step 2/2] === RUN TestTruncateLog
[08:42:53][Step 2/2] I181018 08:37:13.121476 26775 storage/batcheval/cmd_truncate_log.go:59 [s1,r1/1:/M{in-ax}] attempting to truncate raft logs for another range: r2. Normally this is due to a merge and can be ignored.
[08:42:53][Step 2/2] --- PASS: TestTruncateLog (0.10s)
[08:42:53][Step 2/2] === RUN TestConditionFailedError
[08:42:53][Step 2/2] --- PASS: TestConditionFailedError (0.07s)
[08:42:53][Step 2/2] === RUN TestReplicaSetsEqual
[08:42:53][Step 2/2] --- PASS: TestReplicaSetsEqual (0.01s)
[08:42:53][Step 2/2] === RUN TestAppliedIndex
[08:42:53][Step 2/2] --- PASS: TestAppliedIndex (0.14s)
[08:42:53][Step 2/2] === RUN TestReplicaCorruption
[08:42:53][Step 2/2] I181018 08:37:13.408564 26952 storage/replica_command.go:75 [s1,r1/1:/M{in-ax}] test injecting error: replica corruption (processed=false): boom
[08:42:53][Step 2/2] E181018 08:37:13.408764 26952 storage/replica.go:6712 [s1,r1/1:/M{in-ax}] stalling replica due to: boom
[08:42:53][Step 2/2] --- PASS: TestReplicaCorruption (0.12s)
[08:42:53][Step 2/2] === RUN TestChangeReplicasDuplicateError
[08:42:53][Step 2/2] --- PASS: TestChangeReplicasDuplicateError (0.06s)
[08:42:53][Step 2/2] === RUN TestReplicaDanglingMetaIntent
[08:42:53][Step 2/2] === RUN TestReplicaDanglingMetaIntent/reverse=false
[08:42:53][Step 2/2] I181018 08:37:13.578728 26512 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] W181018 08:37:13.579721 27322 storage/intent_resolver.go:675 [s1] failed to push during intent resolution: failed to push "test" id=d7da7d75 key="a" rw=false pri=0.00009085 iso=SERIALIZABLE stat=PENDING epo=0 ts=0.000000123,40 orig=0.000000123,40 max=0.000000124,40 wto=false rop=false seq=1
[08:42:53][Step 2/2] === RUN TestReplicaDanglingMetaIntent/reverse=true
[08:42:53][Step 2/2] I181018 08:37:13.629560 26161 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] W181018 08:37:13.630569 27426 storage/intent_resolver.go:675 [s1] failed to push during intent resolution: failed to push "test" id=a632d09b key="a" rw=false pri=0.02922672 iso=SERIALIZABLE stat=PENDING epo=0 ts=0.000000123,40 orig=0.000000123,40 max=0.000000124,40 wto=false rop=false seq=1
[08:42:53][Step 2/2] --- PASS: TestReplicaDanglingMetaIntent (0.12s)
[08:42:53][Step 2/2] --- PASS: TestReplicaDanglingMetaIntent/reverse=false (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaDanglingMetaIntent/reverse=true (0.05s)
[08:42:53][Step 2/2] === RUN TestReplicaLookupUseReverseScan
[08:42:53][Step 2/2] I181018 08:37:13.719935 27233 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 2 storage.intentResolver: processing intents
[08:42:53][Step 2/2] I181018 08:37:13.720478 27233 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] W181018 08:37:13.723206 27518 storage/intent_resolver.go:675 [s1] failed to resolve intents: result is ambiguous (server shutdown)
[08:42:53][Step 2/2] --- PASS: TestReplicaLookupUseReverseScan (0.09s)
[08:42:53][Step 2/2] === RUN TestRangeLookup
[08:42:53][Step 2/2] --- PASS: TestRangeLookup (0.06s)
[08:42:53][Step 2/2] === RUN TestRequestLeaderEncounterGroupDeleteError
[08:42:53][Step 2/2] --- PASS: TestRequestLeaderEncounterGroupDeleteError (0.06s)
[08:42:53][Step 2/2] === RUN TestIntentIntersect
[08:42:53][Step 2/2] --- PASS: TestIntentIntersect (0.01s)
[08:42:53][Step 2/2] === RUN TestBatchErrorWithIndex
[08:42:53][Step 2/2] --- PASS: TestBatchErrorWithIndex (0.06s)
[08:42:53][Step 2/2] === RUN TestProposalOverhead
[08:42:53][Step 2/2] --- PASS: TestProposalOverhead (0.06s)
[08:42:53][Step 2/2] === RUN TestReplicaLoadSystemConfigSpanIntent
[08:42:53][Step 2/2] --- PASS: TestReplicaLoadSystemConfigSpanIntent (0.12s)
[08:42:53][Step 2/2] === RUN TestReplicaDestroy
[08:42:53][Step 2/2] I181018 08:37:14.153101 27801 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:14.154374 27801 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] --- PASS: TestReplicaDestroy (0.12s)
[08:42:53][Step 2/2] === RUN TestQuotaPoolAccessOnDestroyedReplica
[08:42:53][Step 2/2] I181018 08:37:14.258602 28068 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:14.259757 28068 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] --- PASS: TestQuotaPoolAccessOnDestroyedReplica (0.10s)
[08:42:53][Step 2/2] === RUN TestEntries
[08:42:53][Step 2/2] --- PASS: TestEntries (0.19s)
[08:42:53][Step 2/2] === RUN TestTerm
[08:42:53][Step 2/2] --- PASS: TestTerm (0.09s)
[08:42:53][Step 2/2] === RUN TestGCIncorrectRange
[08:42:53][Step 2/2] W181018 08:37:14.651468 28342 util/hlc/hlc.go:312 remote wall time is too far ahead (2ns) to be trustworthy - updating anyway
[08:42:53][Step 2/2] W181018 08:37:14.655840 28343 util/hlc/hlc.go:312 remote wall time is too far ahead (2ns) to be trustworthy - updating anyway
[08:42:53][Step 2/2] --- PASS: TestGCIncorrectRange (0.07s)
[08:42:53][Step 2/2] === RUN TestReplicaCancelRaft
[08:42:53][Step 2/2] --- PASS: TestReplicaCancelRaft (0.11s)
[08:42:53][Step 2/2] === RUN TestReplicaTryAbandon
[08:42:53][Step 2/2] --- PASS: TestReplicaTryAbandon (0.11s)
[08:42:53][Step 2/2] === RUN TestComputeChecksumVersioning
[08:42:53][Step 2/2] I181018 08:37:14.936604 28491 storage/batcheval/cmd_compute_checksum.go:58 incompatible ComputeChecksum versions (server: 3, requested: 4)
[08:42:53][Step 2/2] --- PASS: TestComputeChecksumVersioning (0.05s)
[08:42:53][Step 2/2] === RUN TestNewReplicaCorruptionError
[08:42:53][Step 2/2] --- PASS: TestNewReplicaCorruptionError (0.01s)
[08:42:53][Step 2/2] === RUN TestDiffRange
[08:42:53][Step 2/2] --- PASS: TestDiffRange (0.00s)
[08:42:53][Step 2/2] === RUN TestSyncSnapshot
[08:42:53][Step 2/2] --- PASS: TestSyncSnapshot (0.05s)
[08:42:53][Step 2/2] === RUN TestReplicaIDChangePending
[08:42:53][Step 2/2] --- PASS: TestReplicaIDChangePending (0.05s)
[08:42:53][Step 2/2] === RUN TestSetReplicaID
[08:42:53][Step 2/2] === RUN TestSetReplicaID/#00
[08:42:53][Step 2/2] === RUN TestSetReplicaID/#01
[08:42:53][Step 2/2] === RUN TestSetReplicaID/#02
[08:42:53][Step 2/2] === RUN TestSetReplicaID/#03
[08:42:53][Step 2/2] === RUN TestSetReplicaID/#04
[08:42:53][Step 2/2] --- PASS: TestSetReplicaID (0.06s)
[08:42:53][Step 2/2] --- PASS: TestSetReplicaID/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestSetReplicaID/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestSetReplicaID/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestSetReplicaID/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestSetReplicaID/#04 (0.00s)
[08:42:53][Step 2/2] === RUN TestReplicaRetryRaftProposal
[08:42:53][Step 2/2] I181018 08:37:15.160543 28807 storage/replica_test.go:8068 test begins
[08:42:53][Step 2/2] --- PASS: TestReplicaRetryRaftProposal (0.12s)
[08:42:53][Step 2/2] === RUN TestReplicaCancelRaftCommandProgress
[08:42:53][Step 2/2] I181018 08:37:15.276919 28808 storage/replica_test.go:8178 abandoning command 0
[08:42:53][Step 2/2] I181018 08:37:15.277113 28808 storage/replica_test.go:8178 abandoning command 1
[08:42:53][Step 2/2] I181018 08:37:15.278002 28808 storage/replica_test.go:8178 abandoning command 4
[08:42:53][Step 2/2] I181018 08:37:15.278221 28808 storage/replica_test.go:8178 abandoning command 5
[08:42:53][Step 2/2] I181018 08:37:15.278343 28808 storage/replica_test.go:8178 abandoning command 6
[08:42:53][Step 2/2] --- PASS: TestReplicaCancelRaftCommandProgress (0.06s)
[08:42:53][Step 2/2] === RUN TestReplicaBurstPendingCommandsAndRepropose
[08:42:53][Step 2/2] --- PASS: TestReplicaBurstPendingCommandsAndRepropose (0.07s)
[08:42:53][Step 2/2] === RUN TestReplicaLeaseReproposal
[08:42:53][Step 2/2] --- PASS: TestReplicaLeaseReproposal (0.09s)
[08:42:53][Step 2/2] === RUN TestReplicaRefreshPendingCommandsTicks
[08:42:53][Step 2/2] --- PASS: TestReplicaRefreshPendingCommandsTicks (0.06s)
[08:42:53][Step 2/2] === RUN TestAmbiguousResultErrorOnRetry
[08:42:53][Step 2/2] === RUN TestAmbiguousResultErrorOnRetry/non-txn-put
[08:42:53][Step 2/2] === RUN TestAmbiguousResultErrorOnRetry/1PC-txn
[08:42:53][Step 2/2] --- PASS: TestAmbiguousResultErrorOnRetry (0.07s)
[08:42:53][Step 2/2] --- PASS: TestAmbiguousResultErrorOnRetry/non-txn-put (0.01s)
[08:42:53][Step 2/2] --- PASS: TestAmbiguousResultErrorOnRetry/1PC-txn (0.01s)
[08:42:53][Step 2/2] === RUN TestGCWithoutThreshold
[08:42:53][Step 2/2] --- PASS: TestGCWithoutThreshold (0.06s)
[08:42:53][Step 2/2] === RUN TestCommandTimeThreshold
[08:42:53][Step 2/2] W181018 08:37:15.686527 28525 util/hlc/hlc.go:312 remote wall time is too far ahead (3ns) to be trustworthy - updating anyway
[08:42:53][Step 2/2] --- PASS: TestCommandTimeThreshold (0.06s)
[08:42:53][Step 2/2] === RUN TestReplicaTimestampCacheBumpNotLost
[08:42:53][Step 2/2] --- PASS: TestReplicaTimestampCacheBumpNotLost (0.06s)
[08:42:53][Step 2/2] === RUN TestReplicaEvaluationNotTxnMutation
[08:42:53][Step 2/2] --- PASS: TestReplicaEvaluationNotTxnMutation (0.05s)
[08:42:53][Step 2/2] === RUN TestReplicaMetrics
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#00
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#01
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#02
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#03
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#04
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#05
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#06
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#07
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#08
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#09
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#10
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#11
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#12
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#13
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#14
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#15
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#16
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#17
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#18
[08:42:53][Step 2/2] === RUN TestReplicaMetrics/#19
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics (0.07s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#04 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#05 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#06 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#07 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#08 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#09 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#10 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#11 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#12 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#13 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#14 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#15 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#16 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#17 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#18 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaMetrics/#19 (0.00s)
[08:42:53][Step 2/2] === RUN TestCancelPendingCommands
[08:42:53][Step 2/2] --- PASS: TestCancelPendingCommands (0.06s)
[08:42:53][Step 2/2] === RUN TestNoopRequestsNotProposed
[08:42:53][Step 2/2] === RUN TestNoopRequestsNotProposed/get_req
[08:42:53][Step 2/2] === RUN TestNoopRequestsNotProposed/put_req
[08:42:53][Step 2/2] === RUN TestNoopRequestsNotProposed/delete_req
[08:42:53][Step 2/2] === RUN TestNoopRequestsNotProposed/get_req_in_txn
[08:42:53][Step 2/2] === RUN TestNoopRequestsNotProposed/put_req_in_txn
[08:42:53][Step 2/2] === RUN TestNoopRequestsNotProposed/delete_req_in_txn
[08:42:53][Step 2/2] === RUN TestNoopRequestsNotProposed/failed_commit_txn_req
[08:42:53][Step 2/2] === RUN TestNoopRequestsNotProposed/push_txn_req
[08:42:53][Step 2/2] === RUN TestNoopRequestsNotProposed/redundant_push_txn_req
[08:42:53][Step 2/2] === RUN TestNoopRequestsNotProposed/resolve_committed_intent_req,_with_intent
[08:42:53][Step 2/2] === RUN TestNoopRequestsNotProposed/resolve_committed_intent_req,_without_intent
[08:42:53][Step 2/2] W181018 08:37:16.451320 30589 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:61aa32fe-279a-4788-80bd-301de957e994 Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:1539851835.957541727,0 Priority:223679 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] === RUN TestNoopRequestsNotProposed/resolve_aborted_intent_req
[08:42:53][Step 2/2] === RUN TestNoopRequestsNotProposed/redundant_resolve_aborted_intent_req
[08:42:53][Step 2/2] --- PASS: TestNoopRequestsNotProposed (0.59s)
[08:42:53][Step 2/2] --- PASS: TestNoopRequestsNotProposed/get_req (0.05s)
[08:42:53][Step 2/2] --- PASS: TestNoopRequestsNotProposed/put_req (0.04s)
[08:42:53][Step 2/2] --- PASS: TestNoopRequestsNotProposed/delete_req (0.04s)
[08:42:53][Step 2/2] --- PASS: TestNoopRequestsNotProposed/get_req_in_txn (0.04s)
[08:42:53][Step 2/2] --- PASS: TestNoopRequestsNotProposed/put_req_in_txn (0.04s)
[08:42:53][Step 2/2] --- PASS: TestNoopRequestsNotProposed/delete_req_in_txn (0.05s)
[08:42:53][Step 2/2] --- PASS: TestNoopRequestsNotProposed/failed_commit_txn_req (0.04s)
[08:42:53][Step 2/2] --- PASS: TestNoopRequestsNotProposed/push_txn_req (0.05s)
[08:42:53][Step 2/2] --- PASS: TestNoopRequestsNotProposed/redundant_push_txn_req (0.05s)
[08:42:53][Step 2/2] --- PASS: TestNoopRequestsNotProposed/resolve_committed_intent_req,_with_intent (0.05s)
[08:42:53][Step 2/2] --- PASS: TestNoopRequestsNotProposed/resolve_committed_intent_req,_without_intent (0.05s)
[08:42:53][Step 2/2] --- PASS: TestNoopRequestsNotProposed/resolve_aborted_intent_req (0.04s)
[08:42:53][Step 2/2] --- PASS: TestNoopRequestsNotProposed/redundant_resolve_aborted_intent_req (0.04s)
[08:42:53][Step 2/2] === RUN TestCommandTooLarge
[08:42:53][Step 2/2] --- PASS: TestCommandTooLarge (0.06s)
[08:42:53][Step 2/2] === RUN TestErrorInRaftApplicationClearsIntents
[08:42:53][Step 2/2] W181018 08:37:16.654021 31306 server/status/runtime.go:295 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
[08:42:53][Step 2/2] I181018 08:37:16.689661 31306 server/server.go:851 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
[08:42:53][Step 2/2] I181018 08:37:16.690111 31306 base/addr_validation.go:279 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
[08:42:53][Step 2/2] I181018 08:37:16.690227 31306 base/addr_validation.go:319 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
[08:42:53][Step 2/2] I181018 08:37:16.693977 31306 server/config.go:493 [n?] 1 storage engine initialized
[08:42:53][Step 2/2] I181018 08:37:16.694154 31306 server/config.go:496 [n?] RocksDB cache size: 128 MiB
[08:42:53][Step 2/2] I181018 08:37:16.694190 31306 server/config.go:496 [n?] store 0: in-memory, size 0 B
[08:42:53][Step 2/2] I181018 08:37:16.742222 31306 server/node.go:371 [n?] **** cluster e302ac4f-34ce-4832-ae12-cf690fd51b1c has been created
[08:42:53][Step 2/2] I181018 08:37:16.742452 31306 server/server.go:1397 [n?] **** add additional nodes by specifying --join=127.0.0.1:40049
[08:42:53][Step 2/2] I181018 08:37:16.744112 31306 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:40049" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:1 patch:0 unstable:1 > build_tag:"v2.2.0-alpha.00000000-1771-g310a049" started_at:1539851836743768525
[08:42:53][Step 2/2] I181018 08:37:16.764716 31306 server/node.go:475 [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=7.1 KiB), ranges=1, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=7221.00 p25=7221.00 p50=7221.00 p75=7221.00 p90=7221.00 pMax=7221.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:53][Step 2/2] I181018 08:37:16.765337 31306 storage/stores.go:242 [n1] read 0 node addresses from persistent storage
[08:42:53][Step 2/2] I181018 08:37:16.766699 31306 server/node.go:698 [n1] connecting to gossip network to verify cluster ID...
[08:42:53][Step 2/2] I181018 08:37:16.767006 31306 server/node.go:723 [n1] node connected via gossip and verified as part of cluster "e302ac4f-34ce-4832-ae12-cf690fd51b1c"
[08:42:53][Step 2/2] I181018 08:37:16.767547 31306 server/node.go:547 [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
[08:42:53][Step 2/2] I181018 08:37:16.768952 31306 server/status/recorder.go:610 [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
[08:42:53][Step 2/2] I181018 08:37:16.769144 31306 server/server.go:1822 [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
[08:42:53][Step 2/2] I181018 08:37:16.771967 31306 server/server.go:1529 [n1] starting https server at 127.0.0.1:35897 (use: 127.0.0.1:35897)
[08:42:53][Step 2/2] I181018 08:37:16.772304 31306 server/server.go:1531 [n1] starting grpc/postgres server at 127.0.0.1:40049
[08:42:53][Step 2/2] I181018 08:37:16.772509 31306 server/server.go:1532 [n1] advertising CockroachDB node at 127.0.0.1:40049
[08:42:53][Step 2/2] W181018 08:37:16.772936 31306 jobs/registry.go:317 [n1] unable to get node liveness: node not in the liveness table
[08:42:53][Step 2/2] I181018 08:37:16.799012 31768 storage/replica_command.go:300 [n1,split,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
[08:42:53][Step 2/2] I181018 08:37:16.889449 31735 storage/replica_command.go:300 [n1,split,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
[08:42:53][Step 2/2] I181018 08:37:17.057751 31769 storage/replica_command.go:300 [n1,split,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
[08:42:53][Step 2/2] I181018 08:37:17.082991 31466 sql/event_log.go:126 [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
[08:42:53][Step 2/2] I181018 08:37:17.141134 31794 storage/replica_command.go:300 [n1,split,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
[08:42:53][Step 2/2] I181018 08:37:17.223679 31469 storage/replica_command.go:300 [n1,split,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
[08:42:53][Step 2/2] W181018 08:37:17.235848 31658 storage/intent_resolver.go:675 [n1,s1] failed to push during intent resolution: failed to push "unnamed" id=de082fa6 key=/Table/SystemConfigSpan/Start rw=true pri=0.00038827 iso=SERIALIZABLE stat=PENDING epo=0 ts=1539851837.134090475,0 orig=1539851837.134090475,0 max=1539851837.134090475,0 wto=false rop=false seq=12
[08:42:53][Step 2/2] I181018 08:37:17.288763 30960 sql/event_log.go:126 [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:2.1-1 User:root}
[08:42:53][Step 2/2] I181018 08:37:17.308498 31661 storage/replica_command.go:300 [n1,split,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
[08:42:53][Step 2/2] I181018 08:37:17.387841 31408 storage/replica_command.go:300 [n1,split,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
[08:42:53][Step 2/2] I181018 08:37:17.416603 31800 sql/event_log.go:126 [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
[08:42:53][Step 2/2] I181018 08:37:17.484481 31761 storage/replica_command.go:300 [n1,split,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
[08:42:53][Step 2/2] I181018 08:37:17.530747 31845 storage/replica_command.go:300 [n1,split,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
[08:42:53][Step 2/2] I181018 08:37:17.577849 31859 sql/event_log.go:126 [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:f89d5287-5cd1-4e2f-aae6-cb13f0460d58 User:root}
[08:42:53][Step 2/2] I181018 08:37:17.588712 31409 storage/replica_command.go:300 [n1,split,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
[08:42:53][Step 2/2] I181018 08:37:17.660362 31876 storage/replica_command.go:300 [n1,split,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
[08:42:53][Step 2/2] I181018 08:37:17.686469 31809 sql/event_log.go:126 [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
[08:42:53][Step 2/2] I181018 08:37:17.736807 31879 sql/event_log.go:126 [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
[08:42:53][Step 2/2] I181018 08:37:17.745845 31896 storage/replica_command.go:300 [n1,split,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
[08:42:53][Step 2/2] I181018 08:37:17.784021 31306 server/server.go:1585 [n1] done ensuring all necessary migrations have run
[08:42:53][Step 2/2] I181018 08:37:17.784288 31306 server/server.go:1588 [n1] serving sql connections
[08:42:53][Step 2/2] I181018 08:37:17.816822 31822 server/server_update.go:68 [n1] no need to upgrade, cluster already at the newest version
[08:42:53][Step 2/2] I181018 08:37:17.829074 31925 storage/replica_command.go:300 [n1,split,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
[08:42:53][Step 2/2] I181018 08:37:17.839185 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.840626 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.844184 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.844980 31825 sql/event_log.go:126 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:40049} Attrs: Locality: ServerVersion:2.1-1 BuildTag:v2.2.0-alpha.00000000-1771-g310a049 StartedAt:1539851836743768525 LocalityAddress:[]} ClusterID:e302ac4f-34ce-4832-ae12-cf690fd51b1c StartedAt:1539851836743768525 LastUp:1539851836743768525}
[08:42:53][Step 2/2] I181018 08:37:17.845649 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.847872 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.849115 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.850162 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.851029 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.851923 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.852725 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.853572 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.854479 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.855424 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.856289 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.857150 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.862309 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.863378 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.864352 31306 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.888691 31903 storage/replica_command.go:300 [n1,split,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
[08:42:53][Step 2/2] I181018 08:37:17.893171 31306 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.897568 31306 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.899225 31306 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.901319 31306 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.904400 31306 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.909677 31306 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.930132 31927 storage/replica_command.go:300 [n1,split,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
[08:42:53][Step 2/2] I181018 08:37:17.937829 31306 server/testserver.go:427 had 15 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.957622 31306 server/testserver.go:427 had 15 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:17.987498 31945 storage/replica_command.go:300 [n1,split,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
[08:42:53][Step 2/2] I181018 08:37:18.000807 31306 server/testserver.go:427 had 16 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:18.037539 32005 storage/replica_command.go:300 [n1,split,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
[08:42:53][Step 2/2] I181018 08:37:18.088685 32006 storage/replica_command.go:300 [n1,split,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
[08:42:53][Step 2/2] I181018 08:37:18.095468 31306 server/testserver.go:427 had 18 ranges at startup, expected 20
[08:42:53][Step 2/2] I181018 08:37:18.133101 31984 storage/replica_command.go:300 [n1,split,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
[08:42:53][Step 2/2] I181018 08:37:18.235758 31306 storage/replica_command.go:300 [n1,s1,r6/1:/{System/tse-Table/System…}] initiating a split of this range at key "b" [r21]
[08:42:53][Step 2/2] I181018 08:37:18.279905 31306 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 2 node.Node: batch
[08:42:53][Step 2/2] 1 storage.replicate: purgatory processing replica
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] 1 kv.DistSender: sending partial batch
[08:42:53][Step 2/2] 1 [async] transport racer
[08:42:53][Step 2/2] 1 [async] storage.merge: processing replica
[08:42:53][Step 2/2] 1 [async] closedts-rangefeed-subscriber
[08:42:53][Step 2/2] I181018 08:37:18.280031 30954 kv/transport_race.go:113 transport race promotion: ran 26 iterations on up to 778 requests
[08:42:53][Step 2/2] I181018 08:37:18.280689 31953 storage/replica_command.go:432 [n1,merge,s1,r6/1:{/System/tse-b}] initiating a merge of r21:{b-/Table/SystemConfigSpan/Start} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:18.281361 31306 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.replicate: purgatory processing replica
[08:42:53][Step 2/2] 1 [async] storage.merge: processing replica
[08:42:53][Step 2/2] 1 [async] closedts-rangefeed-subscriber
[08:42:53][Step 2/2] W181018 08:37:18.282152 31953 internal/client/txn.go:532 [n1,merge,s1,r6/1:{/System/tse-b}] failure aborting transaction: node unavailable; try another peer; abort caused by: node unavailable; try another peer
[08:42:53][Step 2/2] I181018 08:37:18.282206 31306 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async] storage.merge: processing replica
[08:42:53][Step 2/2] 1 [async] closedts-rangefeed-subscriber
[08:42:53][Step 2/2] I181018 08:37:18.282636 31306 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async] storage.merge: processing replica
[08:42:53][Step 2/2] --- PASS: TestErrorInRaftApplicationClearsIntents (1.69s)
[08:42:53][Step 2/2] === RUN TestProposeWithAsyncConsensus
[08:42:53][Step 2/2] --- PASS: TestProposeWithAsyncConsensus (0.12s)
[08:42:53][Step 2/2] === RUN TestSplitMsgApps
[08:42:53][Step 2/2] === RUN TestSplitMsgApps/[]
[08:42:53][Step 2/2] === RUN TestSplitMsgApps/[{MsgApp:1}]
[08:42:53][Step 2/2] === RUN TestSplitMsgApps/[{MsgApp:1}_{MsgApp:2}]
[08:42:53][Step 2/2] === RUN TestSplitMsgApps/[{MsgApp:2}_{MsgApp:1}]
[08:42:53][Step 2/2] === RUN TestSplitMsgApps/[{MsgVote:1}]
[08:42:53][Step 2/2] === RUN TestSplitMsgApps/[{MsgVote:1}_{MsgVote:2}]
[08:42:53][Step 2/2] === RUN TestSplitMsgApps/[{MsgVote:2}_{MsgVote:1}]
[08:42:53][Step 2/2] === RUN TestSplitMsgApps/[{MsgApp:1}_{MsgVote:2}]
[08:42:53][Step 2/2] === RUN TestSplitMsgApps/[{MsgVote:1}_{MsgApp:2}]
[08:42:53][Step 2/2] === RUN TestSplitMsgApps/[{MsgApp:1}_{MsgVote:2}_{MsgApp:3}]
[08:42:53][Step 2/2] === RUN TestSplitMsgApps/[{MsgVote:1}_{MsgApp:2}_{MsgVote:3}]
[08:42:53][Step 2/2] --- PASS: TestSplitMsgApps (0.01s)
[08:42:53][Step 2/2] --- PASS: TestSplitMsgApps/[] (0.00s)
[08:42:53][Step 2/2] --- PASS: TestSplitMsgApps/[{MsgApp:1}] (0.00s)
[08:42:53][Step 2/2] --- PASS: TestSplitMsgApps/[{MsgApp:1}_{MsgApp:2}] (0.00s)
[08:42:53][Step 2/2] --- PASS: TestSplitMsgApps/[{MsgApp:2}_{MsgApp:1}] (0.00s)
[08:42:53][Step 2/2] --- PASS: TestSplitMsgApps/[{MsgVote:1}] (0.00s)
[08:42:53][Step 2/2] --- PASS: TestSplitMsgApps/[{MsgVote:1}_{MsgVote:2}] (0.00s)
[08:42:53][Step 2/2] --- PASS: TestSplitMsgApps/[{MsgVote:2}_{MsgVote:1}] (0.00s)
[08:42:53][Step 2/2] --- PASS: TestSplitMsgApps/[{MsgApp:1}_{MsgVote:2}] (0.00s)
[08:42:53][Step 2/2] --- PASS: TestSplitMsgApps/[{MsgVote:1}_{MsgApp:2}] (0.00s)
[08:42:53][Step 2/2] --- PASS: TestSplitMsgApps/[{MsgApp:1}_{MsgVote:2}_{MsgApp:3}] (0.00s)
[08:42:53][Step 2/2] --- PASS: TestSplitMsgApps/[{MsgVote:1}_{MsgApp:2}_{MsgVote:3}] (0.00s)
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#00
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#01
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#02
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#03
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#04
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#05
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#06
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#07
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#08
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#09
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#10
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#11
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#12
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#13
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#14
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#15
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#16
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#17
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#18
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#19
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#20
[08:42:53][Step 2/2] === RUN TestShouldReplicaQuiesce/#21
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce (0.02s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#04 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#05 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#06 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#07 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#08 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#09 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#10 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#11 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#12 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#13 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#14 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#15 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#16 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#17 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#18 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#19 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#20 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestShouldReplicaQuiesce/#21 (0.00s)
[08:42:53][Step 2/2] === RUN TestReplicaRecomputeStats
[08:42:53][Step 2/2] === RUN TestReplicaRecomputeStats/leftmismatch
[08:42:53][Step 2/2] === RUN TestReplicaRecomputeStats/noop
[08:42:53][Step 2/2] === RUN TestReplicaRecomputeStats/randdelta
[08:42:53][Step 2/2] === RUN TestReplicaRecomputeStats/noopagain
[08:42:53][Step 2/2] --- PASS: TestReplicaRecomputeStats (0.06s)
[08:42:53][Step 2/2] --- PASS: TestReplicaRecomputeStats/leftmismatch (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaRecomputeStats/noop (0.00s)
[08:42:53][Step 2/2] replica_test.go:9898: seed is -3954185732355577081
[08:42:53][Step 2/2] --- PASS: TestReplicaRecomputeStats/randdelta (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaRecomputeStats/noopagain (0.00s)
[08:42:53][Step 2/2] === RUN TestConsistenctQueueErrorFromCheckConsistency
[08:42:53][Step 2/2] E181018 08:37:18.562929 32159 storage/consistency_queue.go:128 storage/replica_test.go:9940: boom
[08:42:53][Step 2/2] E181018 08:37:18.567092 32159 storage/consistency_queue.go:128 storage/replica_test.go:9940: boom
[08:42:53][Step 2/2] --- PASS: TestConsistenctQueueErrorFromCheckConsistency (0.06s)
[08:42:53][Step 2/2] === RUN TestReplicaLocalRetries
[08:42:53][Step 2/2] === RUN TestReplicaLocalRetries/local_retry_of_write_too_old_on_put
[08:42:53][Step 2/2] === RUN TestReplicaLocalRetries/local_retry_of_write_too_old_on_cput
[08:42:53][Step 2/2] === RUN TestReplicaLocalRetries/local_retry_of_write_too_old_on_initput
[08:42:53][Step 2/2] === RUN TestReplicaLocalRetries/serializable_push_without_retry
[08:42:53][Step 2/2] === RUN TestReplicaLocalRetries/no_local_retry_of_write_too_old_on_non-1PC_txn
[08:42:53][Step 2/2] === RUN TestReplicaLocalRetries/no_local_retry_of_write_too_old_on_non-1PC_txn_initput
[08:42:53][Step 2/2] === RUN TestReplicaLocalRetries/no_local_retry_of_write_too_old_on_1PC_txn_and_refresh_spans
[08:42:53][Step 2/2] === RUN TestReplicaLocalRetries/local_retry_of_write_too_old_on_1PC_txn
[08:42:53][Step 2/2] === RUN TestReplicaLocalRetries/local_retry_with_multiple_write_too_old_errors
[08:42:53][Step 2/2] === RUN TestReplicaLocalRetries/local_retry_with_multiple_write_too_old_errors#01
[08:42:53][Step 2/2] === RUN TestReplicaLocalRetries/serializable_commit_with_forwarded_timestamp
[08:42:53][Step 2/2] === RUN TestReplicaLocalRetries/serializable_commit_with_forwarded_timestamp_on_1PC_txn
[08:42:53][Step 2/2] === RUN TestReplicaLocalRetries/serializable_commit_with_write-too-old_flag
[08:42:53][Step 2/2] --- PASS: TestReplicaLocalRetries (0.18s)
[08:42:53][Step 2/2] --- PASS: TestReplicaLocalRetries/local_retry_of_write_too_old_on_put (0.01s)
[08:42:53][Step 2/2] --- PASS: TestReplicaLocalRetries/local_retry_of_write_too_old_on_cput (0.01s)
[08:42:53][Step 2/2] --- PASS: TestReplicaLocalRetries/local_retry_of_write_too_old_on_initput (0.01s)
[08:42:53][Step 2/2] --- PASS: TestReplicaLocalRetries/serializable_push_without_retry (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaLocalRetries/no_local_retry_of_write_too_old_on_non-1PC_txn (0.01s)
[08:42:53][Step 2/2] --- PASS: TestReplicaLocalRetries/no_local_retry_of_write_too_old_on_non-1PC_txn_initput (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaLocalRetries/no_local_retry_of_write_too_old_on_1PC_txn_and_refresh_spans (0.01s)
[08:42:53][Step 2/2] --- PASS: TestReplicaLocalRetries/local_retry_of_write_too_old_on_1PC_txn (0.01s)
[08:42:53][Step 2/2] --- PASS: TestReplicaLocalRetries/local_retry_with_multiple_write_too_old_errors (0.02s)
[08:42:53][Step 2/2] --- PASS: TestReplicaLocalRetries/local_retry_with_multiple_write_too_old_errors#01 (0.02s)
[08:42:53][Step 2/2] --- PASS: TestReplicaLocalRetries/serializable_commit_with_forwarded_timestamp (0.01s)
[08:42:53][Step 2/2] --- PASS: TestReplicaLocalRetries/serializable_commit_with_forwarded_timestamp_on_1PC_txn (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaLocalRetries/serializable_commit_with_write-too-old_flag (0.01s)
[08:42:53][Step 2/2] === RUN TestReplicaPushed1PC
[08:42:53][Step 2/2] === RUN TestReplicaPushed1PC/SERIALIZABLE
[08:42:53][Step 2/2] === RUN TestReplicaPushed1PC/SNAPSHOT
[08:42:53][Step 2/2] --- PASS: TestReplicaPushed1PC (0.05s)
[08:42:53][Step 2/2] --- PASS: TestReplicaPushed1PC/SERIALIZABLE (0.00s)
[08:42:53][Step 2/2] --- PASS: TestReplicaPushed1PC/SNAPSHOT (0.00s)
[08:42:53][Step 2/2] === RUN TestReplicaBootstrapRangeAppliedStateKey
[08:42:53][Step 2/2] === RUN TestReplicaBootstrapRangeAppliedStateKey/version=2.0
[08:42:53][Step 2/2] === RUN TestReplicaBootstrapRangeAppliedStateKey/version=2.0-3
[08:42:53][Step 2/2] === RUN TestReplicaBootstrapRangeAppliedStateKey/version=2.1-1
[08:42:53][Step 2/2] --- PASS: TestReplicaBootstrapRangeAppliedStateKey (0.16s)
[08:42:53][Step 2/2] --- PASS: TestReplicaBootstrapRangeAppliedStateKey/version=2.0 (0.05s)
[08:42:53][Step 2/2] --- PASS: TestReplicaBootstrapRangeAppliedStateKey/version=2.0-3 (0.05s)
[08:42:53][Step 2/2] --- PASS: TestReplicaBootstrapRangeAppliedStateKey/version=2.1-1 (0.04s)
[08:42:53][Step 2/2] === RUN TestReplicaMigrateRangeAppliedStateKey
[08:42:53][Step 2/2] --- PASS: TestReplicaMigrateRangeAppliedStateKey (0.09s)
[08:42:53][Step 2/2] === RUN TestReplicaShouldCampaignOnWake
[08:42:53][Step 2/2] --- PASS: TestReplicaShouldCampaignOnWake (0.01s)
[08:42:53][Step 2/2] === RUN TestRangeStatsRequest
[08:42:53][Step 2/2] --- PASS: TestRangeStatsRequest (0.35s)
[08:42:53][Step 2/2] === RUN TestRollbackMissingTxnRecordNoError
[08:42:53][Step 2/2] --- PASS: TestRollbackMissingTxnRecordNoError (0.06s)
[08:42:53][Step 2/2] === RUN TestScannerAddToQueues
[08:42:53][Step 2/2] --- PASS: TestScannerAddToQueues (0.01s)
[08:42:53][Step 2/2] === RUN TestScannerTiming
[08:42:53][Step 2/2] I181018 08:37:19.596290 32698 storage/scanner_test.go:265 0: average scan: 15.42652ms
[08:42:53][Step 2/2] I181018 08:37:19.697342 32698 storage/scanner_test.go:265 1: average scan: 25.491335ms
[08:42:53][Step 2/2] --- PASS: TestScannerTiming (0.21s)
[08:42:53][Step 2/2] === RUN TestScannerPaceInterval
[08:42:53][Step 2/2] --- PASS: TestScannerPaceInterval (0.01s)
[08:42:53][Step 2/2] === RUN TestScannerMinMaxIdleTime
[08:42:53][Step 2/2] --- PASS: TestScannerMinMaxIdleTime (0.00s)
[08:42:53][Step 2/2] === RUN TestScannerDisabled
[08:42:53][Step 2/2] --- PASS: TestScannerDisabled (0.01s)
[08:42:53][Step 2/2] === RUN TestScannerDisabledWithZeroInterval
[08:42:53][Step 2/2] --- PASS: TestScannerDisabledWithZeroInterval (0.00s)
[08:42:53][Step 2/2] === RUN TestScannerEmptyRangeSet
[08:42:53][Step 2/2] --- PASS: TestScannerEmptyRangeSet (0.01s)
[08:42:53][Step 2/2] === RUN TestRangeIDChunk
[08:42:53][Step 2/2] --- PASS: TestRangeIDChunk (0.00s)
[08:42:53][Step 2/2] === RUN TestRangeIDQueue
[08:42:53][Step 2/2] --- PASS: TestRangeIDQueue (0.01s)
[08:42:53][Step 2/2] === RUN TestSchedulerLoop
[08:42:53][Step 2/2] --- PASS: TestSchedulerLoop (0.00s)
[08:42:53][Step 2/2] === RUN TestSchedulerBuffering
[08:42:53][Step 2/2] --- PASS: TestSchedulerBuffering (0.01s)
[08:42:53][Step 2/2] === RUN TestSplitQueueShouldQueue
[08:42:53][Step 2/2] --- PASS: TestSplitQueueShouldQueue (0.06s)
[08:42:53][Step 2/2] === RUN TestRangeStatsEmpty
[08:42:53][Step 2/2] --- PASS: TestRangeStatsEmpty (0.09s)
[08:42:53][Step 2/2] === RUN TestRangeStatsInit
[08:42:53][Step 2/2] --- PASS: TestRangeStatsInit (0.06s)
[08:42:53][Step 2/2] === RUN TestStorePoolGossipUpdate
[08:42:53][Step 2/2] --- PASS: TestStorePoolGossipUpdate (0.02s)
[08:42:53][Step 2/2] === RUN TestStorePoolGetStoreList
[08:42:53][Step 2/2] --- PASS: TestStorePoolGetStoreList (0.01s)
[08:42:53][Step 2/2] === RUN TestStoreListFilter
[08:42:53][Step 2/2] --- PASS: TestStoreListFilter (0.01s)
[08:42:53][Step 2/2] === RUN TestStorePoolUpdateLocalStore
[08:42:53][Step 2/2] --- PASS: TestStorePoolUpdateLocalStore (0.01s)
[08:42:53][Step 2/2] === RUN TestStorePoolUpdateLocalStoreBeforeGossip
[08:42:53][Step 2/2] --- PASS: TestStorePoolUpdateLocalStoreBeforeGossip (0.02s)
[08:42:53][Step 2/2] === RUN TestStorePoolGetStoreDetails
[08:42:53][Step 2/2] --- PASS: TestStorePoolGetStoreDetails (0.01s)
[08:42:53][Step 2/2] === RUN TestStorePoolFindDeadReplicas
[08:42:53][Step 2/2] --- PASS: TestStorePoolFindDeadReplicas (0.01s)
[08:42:53][Step 2/2] === RUN TestStorePoolDefaultState
[08:42:53][Step 2/2] --- PASS: TestStorePoolDefaultState (0.01s)
[08:42:53][Step 2/2] === RUN TestStorePoolThrottle
[08:42:53][Step 2/2] --- PASS: TestStorePoolThrottle (0.01s)
[08:42:53][Step 2/2] === RUN TestGetLocalities
[08:42:53][Step 2/2] --- PASS: TestGetLocalities (0.02s)
[08:42:53][Step 2/2] === RUN TestStorePoolDecommissioningReplicas
[08:42:53][Step 2/2] --- PASS: TestStorePoolDecommissioningReplicas (0.02s)
[08:42:53][Step 2/2] === RUN TestChooseLeaseToTransfer
[08:42:53][Step 2/2] --- PASS: TestChooseLeaseToTransfer (0.04s)
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#00
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#01
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#02
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#03
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#04
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#05
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#06
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#07
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#08
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#09
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#10
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#11
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#12
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#13
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#14
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#15
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#16
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#17
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#18
[08:42:53][Step 2/2] === RUN TestChooseReplicaToRebalance/#19
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance (0.04s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#00 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#01 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#02 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#03 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#04 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#05 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#06 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#07 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#08 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#09 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#10 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#11 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#12 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#13 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#14 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#15 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#16 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#17 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#18 (0.00s)
[08:42:53][Step 2/2] --- PASS: TestChooseReplicaToRebalance/#19 (0.00s)
[08:42:53][Step 2/2] === RUN TestNoLeaseTransferToBehindReplicas
[08:42:53][Step 2/2] --- PASS: TestNoLeaseTransferToBehindReplicas (0.03s)
[08:42:53][Step 2/2] === RUN TestSnapshotRaftLogLimit
[08:42:53][Step 2/2] === RUN TestSnapshotRaftLogLimit/preemptive
[08:42:53][Step 2/2] === RUN TestSnapshotRaftLogLimit/Raft
[08:42:53][Step 2/2] --- PASS: TestSnapshotRaftLogLimit (9.84s)
[08:42:53][Step 2/2] --- PASS: TestSnapshotRaftLogLimit/preemptive (0.26s)
[08:42:53][Step 2/2] --- PASS: TestSnapshotRaftLogLimit/Raft (0.28s)
[08:42:53][Step 2/2] === RUN TestIterateIDPrefixKeys
[08:42:53][Step 2/2] --- PASS: TestIterateIDPrefixKeys (0.01s)
[08:42:53][Step 2/2] store_test.go:240: seed is -8676006353366013259
[08:42:53][Step 2/2] store_test.go:268: writing op=0 rangeID=28
[08:42:53][Step 2/2] store_test.go:268: writing op=2 rangeID=28
[08:42:53][Step 2/2] store_test.go:268: writing op=1 rangeID=28
[08:42:53][Step 2/2] store_test.go:268: writing op=3 rangeID=28
[08:42:53][Step 2/2] store_test.go:268: writing op=2 rangeID=29
[08:42:53][Step 2/2] store_test.go:268: writing op=0 rangeID=29
[08:42:53][Step 2/2] store_test.go:268: writing op=1 rangeID=29
[08:42:53][Step 2/2] store_test.go:268: writing op=2 rangeID=78
[08:42:53][Step 2/2] store_test.go:268: writing op=3 rangeID=78
[08:42:53][Step 2/2] store_test.go:268: writing op=1 rangeID=78
[08:42:53][Step 2/2] store_test.go:268: writing op=3 rangeID=4
[08:42:53][Step 2/2] store_test.go:268: writing op=2 rangeID=4
[08:42:53][Step 2/2] store_test.go:268: writing op=2 rangeID=51
[08:42:53][Step 2/2] store_test.go:268: writing op=1 rangeID=51
[08:42:53][Step 2/2] store_test.go:268: writing op=3 rangeID=51
[08:42:53][Step 2/2] store_test.go:268: writing op=0 rangeID=51
[08:42:53][Step 2/2] store_test.go:268: writing op=2 rangeID=46
[08:42:53][Step 2/2] store_test.go:268: writing op=3 rangeID=46
[08:42:53][Step 2/2] store_test.go:268: writing op=1 rangeID=46
[08:42:53][Step 2/2] store_test.go:268: writing op=0 rangeID=46
[08:42:53][Step 2/2] store_test.go:268: writing op=1 rangeID=43
[08:42:53][Step 2/2] store_test.go:268: writing op=2 rangeID=43
[08:42:53][Step 2/2] store_test.go:268: writing op=0 rangeID=43
[08:42:53][Step 2/2] store_test.go:268: writing op=3 rangeID=43
[08:42:53][Step 2/2] store_test.go:268: writing op=0 rangeID=88
[08:42:53][Step 2/2] store_test.go:268: writing op=3 rangeID=88
[08:42:53][Step 2/2] store_test.go:268: writing op=2 rangeID=88
[08:42:53][Step 2/2] store_test.go:268: writing op=3 rangeID=13
[08:42:53][Step 2/2] store_test.go:268: writing op=0 rangeID=13
[08:42:53][Step 2/2] store_test.go:268: writing op=3 rangeID=69
[08:42:53][Step 2/2] store_test.go:268: writing op=2 rangeID=69
[08:42:53][Step 2/2] store_test.go:268: writing op=1 rangeID=69
[08:42:53][Step 2/2] store_test.go:306: writing tombstone at rangeID=15
[08:42:53][Step 2/2] store_test.go:306: writing tombstone at rangeID=29
[08:42:53][Step 2/2] store_test.go:306: writing tombstone at rangeID=27
[08:42:53][Step 2/2] store_test.go:306: writing tombstone at rangeID=35
[08:42:53][Step 2/2] store_test.go:306: writing tombstone at rangeID=14
[08:42:53][Step 2/2] store_test.go:306: writing tombstone at rangeID=75
[08:42:53][Step 2/2] store_test.go:306: writing tombstone at rangeID=56
[08:42:53][Step 2/2] store_test.go:306: writing tombstone at rangeID=81
[08:42:53][Step 2/2] store_test.go:306: writing tombstone at rangeID=55
[08:42:53][Step 2/2] store_test.go:306: writing tombstone at rangeID=10
[08:42:53][Step 2/2] === RUN TestStoreInitAndBootstrap
[08:42:53][Step 2/2] I181018 08:37:30.133896 33450 storage/migrations.go:146 [s1] found 1 replicas with abandoned raft entries in 50.669µs
[08:42:53][Step 2/2] --- PASS: TestStoreInitAndBootstrap (0.04s)
[08:42:53][Step 2/2] === RUN TestBootstrapOfNonEmptyStore
[08:42:53][Step 2/2] --- PASS: TestBootstrapOfNonEmptyStore (0.01s)
[08:42:53][Step 2/2] === RUN TestStoreAddRemoveRanges
[08:42:53][Step 2/2] I181018 08:37:30.226990 33580 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:30.228175 33580 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] --- PASS: TestStoreAddRemoveRanges (0.12s)
[08:42:53][Step 2/2] === RUN TestReplicasByKey
[08:42:53][Step 2/2] --- PASS: TestReplicasByKey (0.07s)
[08:42:53][Step 2/2] === RUN TestStoreRemoveReplicaOldDescriptor
[08:42:53][Step 2/2] I181018 08:37:30.403740 33847 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:30.404566 33847 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] --- PASS: TestStoreRemoveReplicaOldDescriptor (0.06s)
[08:42:53][Step 2/2] === RUN TestStoreRemoveReplicaDestroy
[08:42:53][Step 2/2] I181018 08:37:30.465948 33831 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:30.467132 33831 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] --- PASS: TestStoreRemoveReplicaDestroy (0.05s)
[08:42:53][Step 2/2] === RUN TestStoreReplicaVisitor
[08:42:53][Step 2/2] I181018 08:37:30.524603 34038 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:30.525402 34038 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] --- PASS: TestStoreReplicaVisitor (0.06s)
[08:42:53][Step 2/2] === RUN TestHasOverlappingReplica
[08:42:53][Step 2/2] I181018 08:37:30.578553 32681 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:30.579360 32681 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] --- PASS: TestHasOverlappingReplica (0.05s)
[08:42:53][Step 2/2] === RUN TestLookupPrecedingReplica
[08:42:53][Step 2/2] I181018 08:37:30.628630 34026 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:30.629683 34026 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] --- PASS: TestLookupPrecedingReplica (0.10s)
[08:42:53][Step 2/2] === RUN TestMaybeMarkReplicaInitialized
[08:42:53][Step 2/2] I181018 08:37:30.733569 33487 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:30.734787 33487 storage/replica.go:863 removed 8 (3+5) keys in 1ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] --- PASS: TestMaybeMarkReplicaInitialized (0.05s)
[08:42:53][Step 2/2] === RUN TestStoreSend
[08:42:53][Step 2/2] --- PASS: TestStoreSend (0.06s)
[08:42:53][Step 2/2] === RUN TestStoreObservedTimestamp
[08:42:53][Step 2/2] I181018 08:37:30.846292 34487 storage/replica_command.go:75 [s1,r1/1:/M{in-ax}] test injecting error: boom
[08:42:53][Step 2/2] --- PASS: TestStoreObservedTimestamp (0.11s)
[08:42:53][Step 2/2] === RUN TestStoreAnnotateNow
[08:42:53][Step 2/2] I181018 08:37:30.959383 34138 storage/replica_command.go:75 [s1,r1/1:/M{in-ax}] test injecting error: boom
[08:42:53][Step 2/2] I181018 08:37:31.037708 34138 storage/replica_command.go:75 [s1,r1/1:/M{in-ax}] test injecting error: boom
[08:42:53][Step 2/2] --- PASS: TestStoreAnnotateNow (0.17s)
[08:42:53][Step 2/2] === RUN TestStoreVerifyKeys
[08:42:53][Step 2/2] --- PASS: TestStoreVerifyKeys (0.06s)
[08:42:53][Step 2/2] === RUN TestStoreSendUpdateTime
[08:42:53][Step 2/2] --- PASS: TestStoreSendUpdateTime (0.06s)
[08:42:53][Step 2/2] === RUN TestStoreSendWithZeroTime
[08:42:53][Step 2/2] --- PASS: TestStoreSendWithZeroTime (0.05s)
[08:42:53][Step 2/2] === RUN TestStoreSendWithClockOffset
[08:42:53][Step 2/2] --- PASS: TestStoreSendWithClockOffset (0.05s)
[08:42:53][Step 2/2] === RUN TestStoreSendBadRange
[08:42:53][Step 2/2] --- PASS: TestStoreSendBadRange (0.04s)
[08:42:53][Step 2/2] === RUN TestStoreSendOutOfRange
[08:42:53][Step 2/2] --- PASS: TestStoreSendOutOfRange (0.05s)
[08:42:53][Step 2/2] === RUN TestStoreRangeIDAllocation
[08:42:53][Step 2/2] --- PASS: TestStoreRangeIDAllocation (0.06s)
[08:42:53][Step 2/2] === RUN TestStoreReplicasByKey
[08:42:53][Step 2/2] --- PASS: TestStoreReplicasByKey (0.06s)
[08:42:53][Step 2/2] === RUN TestStoreSetRangesMaxBytes
[08:42:53][Step 2/2] --- PASS: TestStoreSetRangesMaxBytes (0.05s)
[08:42:53][Step 2/2] === RUN TestStoreResolveWriteIntent
[08:42:53][Step 2/2] --- PASS: TestStoreResolveWriteIntent (0.15s)
[08:42:53][Step 2/2] === RUN TestStoreResolveWriteIntentRollback
[08:42:53][Step 2/2] --- PASS: TestStoreResolveWriteIntentRollback (0.08s)
[08:42:53][Step 2/2] === RUN TestStoreResolveWriteIntentPushOnRead
[08:42:53][Step 2/2] --- PASS: TestStoreResolveWriteIntentPushOnRead (0.11s)
[08:42:53][Step 2/2] === RUN TestStoreResolveWriteIntentSnapshotIsolation
[08:42:53][Step 2/2] --- PASS: TestStoreResolveWriteIntentSnapshotIsolation (0.06s)
[08:42:53][Step 2/2] === RUN TestStoreResolveWriteIntentNoTxn
[08:42:53][Step 2/2] --- PASS: TestStoreResolveWriteIntentNoTxn (0.08s)
[08:42:53][Step 2/2] === RUN TestStoreReadInconsistent
[08:42:53][Step 2/2] === RUN TestStoreReadInconsistent/READ_UNCOMMITTED
[08:42:53][Step 2/2] W181018 08:37:32.094247 35982 storage/intent_resolver.go:675 [s1] failed to push during intent resolution: failed to push "testA" id=ab597220 key="true-a" rw=false pri=0.00000005 iso=SERIALIZABLE stat=PENDING epo=0 ts=0.000000123,43 orig=0.000000123,43 max=0.000000123,45 wto=false rop=false seq=1
[08:42:53][Step 2/2] I181018 08:37:32.122404 35032 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] === RUN TestStoreReadInconsistent/INCONSISTENT
[08:42:53][Step 2/2] W181018 08:37:32.183511 36413 storage/intent_resolver.go:675 [s1] failed to push during intent resolution: failed to push "testA" id=9e66cc0a key="true-a" rw=false pri=0.00000005 iso=SERIALIZABLE stat=PENDING epo=0 ts=0.000000123,43 orig=0.000000123,43 max=0.000000123,45 wto=false rop=false seq=1
[08:42:53][Step 2/2] I181018 08:37:32.209828 36236 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] --- PASS: TestStoreReadInconsistent (0.18s)
[08:42:53][Step 2/2] --- PASS: TestStoreReadInconsistent/READ_UNCOMMITTED (0.08s)
[08:42:53][Step 2/2] --- PASS: TestStoreReadInconsistent/INCONSISTENT (0.09s)
[08:42:53][Step 2/2] === RUN TestStoreScanResumeTSCache
[08:42:53][Step 2/2] --- PASS: TestStoreScanResumeTSCache (0.07s)
[08:42:53][Step 2/2] === RUN TestStoreScanIntents
[08:42:53][Step 2/2] W181018 08:37:32.456423 36533 storage/intent_resolver.go:675 [s1] failed to push during intent resolution: failed to push "test-2" id=6b31e10d key="TestStoreScanIntents2-00" rw=false pri=0.00000000 iso=SERIALIZABLE stat=PENDING epo=0 ts=1539851852.419189727,0 orig=1539851852.419189727,0 max=1539851852.419189728,0 wto=false rop=false seq=1
[08:42:53][Step 2/2] I181018 08:37:32.483426 36659 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] --- PASS: TestStoreScanIntents (0.20s)
[08:42:53][Step 2/2] === RUN TestStoreScanInconsistentResolvesIntents
[08:42:53][Step 2/2] I181018 08:37:32.612295 36639 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 2 storage.intentResolver: processing intents
[08:42:53][Step 2/2] W181018 08:37:32.612803 36843 storage/intent_resolver.go:675 [s1] failed to resolve intents: node unavailable; try another peer
[08:42:53][Step 2/2] I181018 08:37:32.613086 36639 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] --- PASS: TestStoreScanInconsistentResolvesIntents (0.19s)
[08:42:53][Step 2/2] === RUN TestStoreScanIntentsFromTwoTxns
[08:42:53][Step 2/2] --- PASS: TestStoreScanIntentsFromTwoTxns (0.08s)
[08:42:53][Step 2/2] === RUN TestStoreScanMultipleIntents
[08:42:53][Step 2/2] --- PASS: TestStoreScanMultipleIntents (0.07s)
[08:42:53][Step 2/2] === RUN TestStoreBadRequests
[08:42:53][Step 2/2] --- PASS: TestStoreBadRequests (0.11s)
[08:42:53][Step 2/2] === RUN TestMaybeRemove
[08:42:53][Step 2/2] I181018 08:37:32.996108 37096 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:32.996986 37096 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] --- PASS: TestMaybeRemove (0.06s)
[08:42:53][Step 2/2] === RUN TestStoreGCThreshold
[08:42:53][Step 2/2] --- PASS: TestStoreGCThreshold (0.11s)
[08:42:53][Step 2/2] === RUN TestStoreRangePlaceholders
[08:42:53][Step 2/2] I181018 08:37:33.168533 36543 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:33.169599 36543 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] --- PASS: TestStoreRangePlaceholders (0.12s)
[08:42:53][Step 2/2] === RUN TestStoreRemovePlaceholderOnError
[08:42:53][Step 2/2] I181018 08:37:33.274910 37364 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:33.276249 37364 storage/replica.go:863 removed 8 (3+5) keys in 1ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] --- PASS: TestStoreRemovePlaceholderOnError (0.06s)
[08:42:53][Step 2/2] === RUN TestStoreRemovePlaceholderOnRaftIgnored
[08:42:53][Step 2/2] I181018 08:37:33.349657 37493 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:33.350983 37493 storage/replica.go:863 removed 8 (3+5) keys in 1ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] W181018 08:37:33.354280 37493 storage/replica.go:5048 [s1,r1/2:{-}] failed to look up recipient replica 0 in r1 while sending MsgAppResp: replica 0 not present in (n2,s2):2, []
[08:42:53][Step 2/2] --- PASS: TestStoreRemovePlaceholderOnRaftIgnored (0.26s)
[08:42:53][Step 2/2] === RUN TestRemovedReplicaTombstone
[08:42:53][Step 2/2] === RUN TestRemovedReplicaTombstone/#00
[08:42:53][Step 2/2] I181018 08:37:33.606449 37667 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:33.607489 37667 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] === RUN TestRemovedReplicaTombstone/#01
[08:42:53][Step 2/2] I181018 08:37:33.671360 37110 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:33.672336 37110 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] === RUN TestRemovedReplicaTombstone/#02
[08:42:53][Step 2/2] I181018 08:37:33.727928 37120 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:33.729151 37120 storage/replica.go:863 removed 8 (3+5) keys in 1ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] === RUN TestRemovedReplicaTombstone/#03
[08:42:53][Step 2/2] I181018 08:37:33.794305 37940 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:33.795413 37940 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] === RUN TestRemovedReplicaTombstone/#04
[08:42:53][Step 2/2] I181018 08:37:33.852945 38034 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:33.854173 38034 storage/replica.go:863 removed 8 (3+5) keys in 1ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] === RUN TestRemovedReplicaTombstone/#05
[08:42:53][Step 2/2] I181018 08:37:33.909770 38141 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:33.911009 38141 storage/replica.go:863 removed 8 (3+5) keys in 1ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] === RUN TestRemovedReplicaTombstone/#06
[08:42:53][Step 2/2] I181018 08:37:33.961269 38118 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:33.962486 38118 storage/replica.go:863 removed 8 (3+5) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] === RUN TestRemovedReplicaTombstone/#07
[08:42:53][Step 2/2] I181018 08:37:34.018818 37668 storage/store.go:2580 removing replica r1/1
[08:42:53][Step 2/2] I181018 08:37:34.020327 37668 storage/replica.go:863 removed 8 (3+5) keys in 1ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] === RUN TestRemovedReplicaTombstone/#08
[08:42:53][Step 2/2] === RUN TestRemovedReplicaTombstone/#09
[08:42:53][Step 2/2] === RUN TestRemovedReplicaTombstone/#10
[08:42:53][Step 2/2] === RUN TestRemovedReplicaTombstone/#11
[08:42:53][Step 2/2] === RUN TestRemovedReplicaTombstone/#12
[08:42:53][Step 2/2] --- PASS: TestRemovedReplicaTombstone (0.72s)
[08:42:53][Step 2/2] --- PASS: TestRemovedReplicaTombstone/#00 (0.04s)
[08:42:53][Step 2/2] --- PASS: TestRemovedReplicaTombstone/#01 (0.07s)
[08:42:53][Step 2/2] --- PASS: TestRemovedReplicaTombstone/#02 (0.07s)
[08:42:53][Step 2/2] --- PASS: TestRemovedReplicaTombstone/#03 (0.05s)
[08:42:53][Step 2/2] --- PASS: TestRemovedReplicaTombstone/#04 (0.06s)
[08:42:53][Step 2/2] --- PASS: TestRemovedReplicaTombstone/#05 (0.06s)
[08:42:53][Step 2/2] --- PASS: TestRemovedReplicaTombstone/#06 (0.05s)
[08:42:53][Step 2/2] --- PASS: TestRemovedReplicaTombstone/#07 (0.05s)
[08:42:53][Step 2/2] --- PASS: TestRemovedReplicaTombstone/#08 (0.05s)
[08:42:53][Step 2/2] --- PASS: TestRemovedReplicaTombstone/#09 (0.05s)
[08:42:53][Step 2/2] --- PASS: TestRemovedReplicaTombstone/#10 (0.05s)
[08:42:53][Step 2/2] --- PASS: TestRemovedReplicaTombstone/#11 (0.05s)
[08:42:53][Step 2/2] --- PASS: TestRemovedReplicaTombstone/#12 (0.05s)
[08:42:53][Step 2/2] === RUN TestSendSnapshotThrottling
[08:42:53][Step 2/2] --- PASS: TestSendSnapshotThrottling (0.01s)
[08:42:53][Step 2/2] === RUN TestReserveSnapshotThrottling
[08:42:53][Step 2/2] --- PASS: TestReserveSnapshotThrottling (0.07s)
[08:42:53][Step 2/2] === RUN TestReserveSnapshotFullnessLimit
[08:42:53][Step 2/2] --- PASS: TestReserveSnapshotFullnessLimit (0.06s)
[08:42:53][Step 2/2] === RUN TestSnapshotRateLimit
[08:42:53][Step 2/2] === RUN TestSnapshotRateLimit/UNKNOWN
[08:42:53][Step 2/2] === RUN TestSnapshotRateLimit/RECOVERY
[08:42:53][Step 2/2] === RUN TestSnapshotRateLimit/REBALANCE
[08:42:53][Step 2/2] --- PASS: TestSnapshotRateLimit (0.01s)
[08:42:53][Step 2/2] --- PASS: TestSnapshotRateLimit/UNKNOWN (0.00s)
[08:42:53][Step 2/2] --- PASS: TestSnapshotRateLimit/RECOVERY (0.00s)
[08:42:53][Step 2/2] --- PASS: TestSnapshotRateLimit/REBALANCE (0.00s)
[08:42:53][Step 2/2] === RUN TestStoresAddStore
[08:42:53][Step 2/2] --- PASS: TestStoresAddStore (0.01s)
[08:42:53][Step 2/2] === RUN TestStoresRemoveStore
[08:42:53][Step 2/2] --- PASS: TestStoresRemoveStore (0.00s)
[08:42:53][Step 2/2] === RUN TestStoresGetStoreCount
[08:42:53][Step 2/2] --- PASS: TestStoresGetStoreCount (0.01s)
[08:42:53][Step 2/2] === RUN TestStoresVisitStores
[08:42:53][Step 2/2] --- PASS: TestStoresVisitStores (0.00s)
[08:42:53][Step 2/2] === RUN TestStoresGetReplicaForRangeID
[08:42:53][Step 2/2] --- PASS: TestStoresGetReplicaForRangeID (0.09s)
[08:42:53][Step 2/2] === RUN TestStoresGetStore
[08:42:53][Step 2/2] --- PASS: TestStoresGetStore (0.02s)
[08:42:53][Step 2/2] === RUN TestStoresGossipStorage
[08:42:53][Step 2/2] I181018 08:37:34.585135 38983 storage/stores.go:242 read 0 node addresses from persistent storage
[08:42:53][Step 2/2] I181018 08:37:34.585955 38983 storage/stores.go:261 wrote 1 node addresses to persistent storage
[08:42:53][Step 2/2] I181018 08:37:34.586057 38983 storage/stores.go:242 read 1 node addresses from persistent storage
[08:42:53][Step 2/2] I181018 08:37:34.586774 38983 storage/stores.go:242 read 1 node addresses from persistent storage
[08:42:53][Step 2/2] --- PASS: TestStoresGossipStorage (0.02s)
[08:42:53][Step 2/2] === RUN TestStoresGossipStorageReadLatest
[08:42:53][Step 2/2] I181018 08:37:34.608603 38880 storage/stores.go:261 wrote 1 node addresses to persistent storage
[08:42:53][Step 2/2] I181018 08:37:34.609178 38880 storage/stores.go:261 wrote 2 node addresses to persistent storage
[08:42:53][Step 2/2] I181018 08:37:34.609355 38880 storage/stores.go:242 read 2 node addresses from persistent storage
[08:42:53][Step 2/2] I181018 08:37:34.609854 38880 storage/stores.go:242 read 2 node addresses from persistent storage
[08:42:53][Step 2/2] --- PASS: TestStoresGossipStorageReadLatest (0.02s)
[08:42:53][Step 2/2] === RUN TestStoresClusterVersionWriteSynthesize
[08:42:53][Step 2/2] --- PASS: TestStoresClusterVersionWriteSynthesize (0.03s)
[08:42:53][Step 2/2] === RUN TestStoresClusterVersionIncompatible
[08:42:53][Step 2/2] === RUN TestStoresClusterVersionIncompatible/StoreTooNewUseVersion
[08:42:53][Step 2/2] === RUN TestStoresClusterVersionIncompatible/StoreTooNewMinVersion
[08:42:53][Step 2/2] === RUN TestStoresClusterVersionIncompatible/StoreTooOldUseVersion
[08:42:53][Step 2/2] === RUN TestStoresClusterVersionIncompatible/StoreTooOldMinVersion
[08:42:53][Step 2/2] --- PASS: TestStoresClusterVersionIncompatible (0.04s)
[08:42:53][Step 2/2] --- PASS: TestStoresClusterVersionIncompatible/StoreTooNewUseVersion (0.01s)
[08:42:53][Step 2/2] --- PASS: TestStoresClusterVersionIncompatible/StoreTooNewMinVersion (0.01s)
[08:42:53][Step 2/2] --- PASS: TestStoresClusterVersionIncompatible/StoreTooOldUseVersion (0.01s)
[08:42:53][Step 2/2] --- PASS: TestStoresClusterVersionIncompatible/StoreTooOldMinVersion (0.01s)
[08:42:53][Step 2/2] === RUN TestTxnWaitQueueEnableDisable
[08:42:53][Step 2/2] --- PASS: TestTxnWaitQueueEnableDisable (0.10s)
[08:42:53][Step 2/2] === RUN TestTxnWaitQueueCancel
[08:42:53][Step 2/2] --- PASS: TestTxnWaitQueueCancel (0.05s)
[08:42:53][Step 2/2] === RUN TestTxnWaitQueueUpdateTxn
[08:42:53][Step 2/2] --- PASS: TestTxnWaitQueueUpdateTxn (0.05s)
[08:42:53][Step 2/2] === RUN TestTxnWaitQueueTxnSilentlyCompletes
[08:42:53][Step 2/2] --- PASS: TestTxnWaitQueueTxnSilentlyCompletes (0.11s)
[08:42:53][Step 2/2] === RUN TestTxnWaitQueueUpdateNotPushedTxn
[08:42:53][Step 2/2] --- PASS: TestTxnWaitQueueUpdateNotPushedTxn (0.11s)
[08:42:53][Step 2/2] === RUN TestTxnWaitQueuePusheeExpires
[08:42:53][Step 2/2] --- PASS: TestTxnWaitQueuePusheeExpires (0.05s)
[08:42:53][Step 2/2] === RUN TestTxnWaitQueuePusherUpdate
[08:42:53][Step 2/2] === RUN TestTxnWaitQueuePusherUpdate/txnRecordExists=false
[08:42:53][Step 2/2] === RUN TestTxnWaitQueuePusherUpdate/txnRecordExists=true
[08:42:53][Step 2/2] --- PASS: TestTxnWaitQueuePusherUpdate (0.11s)
[08:42:53][Step 2/2] --- PASS: TestTxnWaitQueuePusherUpdate/txnRecordExists=false (0.06s)
[08:42:53][Step 2/2] --- PASS: TestTxnWaitQueuePusherUpdate/txnRecordExists=true (0.04s)
[08:42:53][Step 2/2] === RUN TestTxnWaitQueueDependencyCycle
[08:42:53][Step 2/2] --- PASS: TestTxnWaitQueueDependencyCycle (0.06s)
[08:42:53][Step 2/2] === RUN TestTxnWaitQueueDependencyCycleWithPriorityInversion
[08:42:53][Step 2/2] --- PASS: TestTxnWaitQueueDependencyCycleWithPriorityInversion (0.07s)
[08:42:53][Step 2/2] === RUN TestBelowRaftProtos
[08:42:53][Step 2/2] --- PASS: TestBelowRaftProtos (0.01s)
[08:42:53][Step 2/2] === RUN TestStoreRangeLease
[08:42:53][Step 2/2] === RUN TestStoreRangeLease/enableEpoch=false
[08:42:53][Step 2/2] I181018 08:37:35.503836 39851 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:33945" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:35.526317 39851 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/NodeLivenessMax [r2]
[08:42:53][Step 2/2] I181018 08:37:35.549679 39851 storage/replica_command.go:300 [s1,r2/1:/{System/NodeL…-Max}] initiating a split of this range at key "a" [r3]
[08:42:53][Step 2/2] I181018 08:37:35.577517 39851 storage/replica_command.go:300 [s1,r3/1:{a-/Max}] initiating a split of this range at key "b" [r4]
[08:42:53][Step 2/2] I181018 08:37:35.609516 39851 storage/replica_command.go:300 [s1,r4/1:{b-/Max}] initiating a split of this range at key "c" [r5]
[08:42:53][Step 2/2] I181018 08:37:35.638804 39851 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:53][Step 2/2] === RUN TestStoreRangeLease/enableEpoch=true
[08:42:53][Step 2/2] I181018 08:37:35.757620 40105 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:45641" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:35.777191 40105 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/NodeLivenessMax [r2]
[08:42:53][Step 2/2] I181018 08:37:35.797963 40105 storage/replica_command.go:300 [s1,r2/1:/{System/NodeL…-Max}] initiating a split of this range at key "a" [r3]
[08:42:53][Step 2/2] I181018 08:37:35.827446 40105 storage/replica_command.go:300 [s1,r3/1:{a-/Max}] initiating a split of this range at key "b" [r4]
[08:42:53][Step 2/2] I181018 08:37:35.860786 40105 storage/replica_command.go:300 [s1,r4/1:{b-/Max}] initiating a split of this range at key "c" [r5]
[08:42:53][Step 2/2] I181018 08:37:35.890155 40105 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:53][Step 2/2] I181018 08:37:35.896206 40140 storage/replica_proposal.go:212 [s1,r2/1:{/System/Node…-a}] new range lease repl=(n1,s1):1 seq=2 start=0.000000000,0 epo=1 pro=1.800000125,12 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,7
[08:42:53][Step 2/2] I181018 08:37:35.908236 40143 storage/replica_proposal.go:212 [s1,r4/1:{b-c}] new range lease repl=(n1,s1):1 seq=2 start=0.000000000,0 epo=1 pro=1.800000125,39 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,7
[08:42:53][Step 2/2] I181018 08:37:35.914012 40146 storage/replica_proposal.go:212 [s1,r3/1:{a-b}] new range lease repl=(n1,s1):1 seq=2 start=0.000000000,0 epo=1 pro=1.800000125,58 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,7
[08:42:53][Step 2/2] I181018 08:37:35.921536 40148 storage/replica_proposal.go:212 [s1,r5/1:{c-/Max}] new range lease repl=(n1,s1):1 seq=2 start=0.000000000,0 epo=1 pro=1.800000125,80 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,7
[08:42:53][Step 2/2] --- PASS: TestStoreRangeLease (0.52s)
[08:42:53][Step 2/2] --- PASS: TestStoreRangeLease/enableEpoch=false (0.25s)
[08:42:53][Step 2/2] --- PASS: TestStoreRangeLease/enableEpoch=true (0.25s)
[08:42:53][Step 2/2] === RUN TestStoreRangeLeaseSwitcheroo
[08:42:53][Step 2/2] I181018 08:37:36.003872 39936 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:37097" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:36.024914 39936 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r2]
[08:42:53][Step 2/2] I181018 08:37:36.043251 39936 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:53][Step 2/2] I181018 08:37:36.053688 40221 storage/replica_proposal.go:212 [s1,r2/1:{a-/Max}] new range lease repl=(n1,s1):1 seq=2 start=0.000000000,0 epo=1 pro=1.800000125,3 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,6
[08:42:53][Step 2/2] I181018 08:37:36.098003 39936 storage/client_test.go:1252 test clock advanced to: 3.600000127,0
[08:42:53][Step 2/2] I181018 08:37:36.152642 39936 storage/client_test.go:1252 test clock advanced to: 5.400000129,1
[08:42:53][Step 2/2] I181018 08:37:36.164539 40362 storage/replica_proposal.go:212 [s1,r2/1:{a-/Max}] new range lease repl=(n1,s1):1 seq=4 start=3.600000127,101 epo=2 pro=3.600000127,108 following repl=(n1,s1):1 seq=3 start=1.800000125,77 exp=4.500000127,42 pro=3.600000127,43
[08:42:53][Step 2/2] I181018 08:37:36.202686 40367 storage/replica_proposal.go:212 [s1,r2/1:{a-/Max}] new range lease repl=(n1,s1):1 seq=5 start=3.600000127,101 epo=3 pro=5.400000129,99 following repl=(n1,s1):1 seq=4 start=3.600000127,101 epo=2 pro=5.400000129,43
[08:42:53][Step 2/2] --- PASS: TestStoreRangeLeaseSwitcheroo (0.28s)
[08:42:53][Step 2/2] === RUN TestStoreGossipSystemData
[08:42:53][Step 2/2] I181018 08:37:36.289091 40569 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:36063" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:36.315317 40569 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r2]
[08:42:53][Step 2/2] I181018 08:37:36.389542 40561 storage/replica_proposal.go:212 [s1,r2/1:/{Table/System…-Max}] new range lease repl=(n1,s1):1 seq=2 start=0.000000123,123 epo=1 pro=0.000000123,226 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,6
[08:42:53][Step 2/2] I181018 08:37:36.394537 40774 storage/node_liveness.go:451 [hb] heartbeat failed on epoch increment; retrying
[08:42:53][Step 2/2] --- PASS: TestStoreGossipSystemData (0.18s)
[08:42:53][Step 2/2] === RUN TestGossipSystemConfigOnLeaseChange
[08:42:53][Step 2/2] I181018 08:37:36.465105 40674 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:38525" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:37:36.510170 40674 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:36.511182 40674 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:40819" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:36.512752 41011 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:38525
[08:42:53][Step 2/2] W181018 08:37:36.558128 40674 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:36.558881 40674 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:38111" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:36.560774 41084 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:38525
[08:42:53][Step 2/2] I181018 08:37:36.583237 40674 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot ed9de6aa at applied index 16
[08:42:53][Step 2/2] I181018 08:37:36.589893 40674 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 9ms
[08:42:53][Step 2/2] I181018 08:37:36.592134 41086 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=ed9de6aa, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:53][Step 2/2] I181018 08:37:36.596307 41086 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=1ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:36.600460 40674 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:37:36.612660 40674 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=c7ae5c70] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:36.628122 40674 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot bfe293ac at applied index 18
[08:42:53][Step 2/2] I181018 08:37:36.629688 40674 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:37:36.631299 41115 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=bfe293ac, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:53][Step 2/2] I181018 08:37:36.635101 41115 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:36.639037 40674 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:37:36.664133 40674 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=79f1a428] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] --- PASS: TestGossipSystemConfigOnLeaseChange (0.61s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeTwoEmptyRanges
[08:42:53][Step 2/2] I181018 08:37:37.081739 41119 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:34405" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:37.099369 41119 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:37.123284 41119 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:37.146599 41170 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=51fe86a4] removing replica r2/1
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeTwoEmptyRanges (0.14s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeMetadataCleanup
[08:42:53][Step 2/2] I181018 08:37:37.232689 41248 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:42615" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:37.257448 41248 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:37.289901 41248 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:37.325001 41332 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=982d0dc8] removing replica r2/1
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeMetadataCleanup (0.19s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeWithData
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeWithData/retries=0
[08:42:53][Step 2/2] I181018 08:37:37.428502 41318 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:45239" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:37.457091 41318 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:37.486983 41318 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:37.508937 41318 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:53][Step 2/2] I181018 08:37:37.514375 41450 storage/replica_proposal.go:212 [s1,r2/1:{b-/Max}] new range lease repl=(n1,s1):1 seq=2 start=0.000000000,0 epo=1 pro=1.800000125,3 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,6
[08:42:53][Step 2/2] I181018 08:37:37.552108 41454 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=204e7db6] removing replica r2/1
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeWithData/retries=3
[08:42:53][Step 2/2] I181018 08:37:37.639361 41438 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:36371" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:37.663776 41438 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:37.688365 41438 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:37.704941 41438 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:53][Step 2/2] I181018 08:37:37.709220 41582 storage/replica_proposal.go:212 [s1,r2/1:{b-/Max}] new range lease repl=(n1,s1):1 seq=2 start=0.000000000,0 epo=1 pro=1.800000125,3 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,6 pro=0.000000123,8
[08:42:53][Step 2/2] I181018 08:37:37.764089 41438 storage/client_test.go:1252 test clock advanced to: 3.600000127,0
[08:42:53][Step 2/2] I181018 08:37:37.806264 41438 storage/client_test.go:1252 test clock advanced to: 5.400000129,0
[08:42:53][Step 2/2] I181018 08:37:37.849682 41438 storage/client_test.go:1252 test clock advanced to: 7.200000131,0
[08:42:53][Step 2/2] I181018 08:37:37.881168 41618 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=69a3f7ac] removing replica r2/1
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeWithData (0.55s)
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeWithData/retries=0 (0.22s)
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeWithData/retries=3 (0.33s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeTimestampCache
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeTimestampCache/disjoint-leaseholders=false
[08:42:53][Step 2/2] I181018 08:37:37.965867 41704 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:39101" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:37.989811 41704 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:38.015273 41704 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:38.036651 41722 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=32374503] removing replica r2/1
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeTimestampCache/disjoint-leaseholders=true
[08:42:53][Step 2/2] I181018 08:37:38.108313 41287 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:42305" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:37:38.155417 41287 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:38.156512 41287 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45963" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:38.158142 41711 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:42305
[08:42:53][Step 2/2] I181018 08:37:38.179491 41287 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:38.202028 41287 storage/store_snapshot.go:621 [s1,r1/1:{/Min-b}] sending preemptive snapshot ecfadce1 at applied index 18
[08:42:53][Step 2/2] I181018 08:37:38.204186 41287 storage/store_snapshot.go:664 [s1,r1/1:{/Min-b}] streamed snapshot to (n2,s2):?: kv pairs: 17, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:37:38.212806 42068 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 18 (id=ecfadce1, encoded size=2673, 1 rocksdb batches, 8 log entries)
[08:42:53][Step 2/2] I181018 08:37:38.216523 42068 storage/replica_raftstorage.go:810 [s2,r1/?:{/Min-b}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:38.219720 41287 storage/replica_command.go:816 [s1,r1/1:{/Min-b}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:{/Min-b} [(n1,s1):1, next=2, gen=1]
[08:42:53][Step 2/2] I181018 08:37:38.228667 41287 storage/replica.go:3884 [s1,r1/1:{/Min-b},txn=09594273] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:38.513276 41287 storage/store_snapshot.go:621 [s1,r2/1:{b-/Max}] sending preemptive snapshot 242ed914 at applied index 12
[08:42:53][Step 2/2] I181018 08:37:38.514517 41287 storage/store_snapshot.go:664 [s1,r2/1:{b-/Max}] streamed snapshot to (n2,s2):?: kv pairs: 42, log entries: 2, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:37:38.516467 41945 storage/replica_raftstorage.go:804 [s2,r2/?:{-}] applying preemptive snapshot at index 12 (id=242ed914, encoded size=7480, 1 rocksdb batches, 2 log entries)
[08:42:53][Step 2/2] I181018 08:37:38.518566 41945 storage/replica_raftstorage.go:810 [s2,r2/?:{b-/Max}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:38.521661 41287 storage/replica_command.go:816 [s1,r2/1:{b-/Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:{b-/Max} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:37:38.538206 41287 storage/replica.go:3884 [s1,r2/1:{b-/Max},txn=5c71a929] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:38.594284 41958 storage/replica_proposal.go:212 [s2,r2/2:{b-/Max}] new range lease repl=(n2,s2):2 seq=2 start=0.000000123,305 epo=1 pro=0.000000123,306 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,6
[08:42:53][Step 2/2] I181018 08:37:38.667241 41287 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:38.704200 41813 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=b6910e4b] removing replica r2/1
[08:42:53][Step 2/2] I181018 08:37:38.704796 41989 storage/store.go:2580 [s2,r1/2:{/Min-b}] removing replica r2/2
[08:42:53][Step 2/2] W181018 08:37:38.736109 41943 storage/raft_transport.go:584 while processing outgoing Raft queue to node 2: rpc error: code = Unavailable desc = transport is closing:
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeTimestampCache (0.84s)
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeTimestampCache/disjoint-leaseholders=false (0.14s)
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeTimestampCache/disjoint-leaseholders=true (0.69s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeTimestampCacheCausality
[08:42:53][Step 2/2] I181018 08:37:38.800149 41931 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:40111" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:37:38.844829 41931 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:38.845547 41931 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:42341" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:38.847319 42304 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:40111
[08:42:53][Step 2/2] W181018 08:37:38.896254 41931 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:38.896946 41931 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:44169" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:38.899906 42398 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:40111
[08:42:53][Step 2/2] W181018 08:37:38.986048 41931 gossip/gossip.go:1496 [n4] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:38.988024 41931 gossip/gossip.go:393 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:37527" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:38.988830 42531 gossip/client.go:129 [n4] started gossip client to 127.0.0.1:40111
[08:42:53][Step 2/2] I181018 08:37:39.020908 41931 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r2]
[08:42:53][Step 2/2] I181018 08:37:39.054452 41931 storage/replica_command.go:300 [s1,r2/1:{a-/Max}] initiating a split of this range at key "b" [r3]
[08:42:53][Step 2/2] I181018 08:37:39.092497 41931 storage/store_snapshot.go:621 [s1,r2/1:{a-b}] sending preemptive snapshot 126ad496 at applied index 15
[08:42:53][Step 2/2] I181018 08:37:39.093560 41931 storage/store_snapshot.go:664 [s1,r2/1:{a-b}] streamed snapshot to (n2,s2):?: kv pairs: 8, log entries: 5, rate-limit: 2.0 MiB/sec, 8ms
[08:42:53][Step 2/2] I181018 08:37:39.096157 42549 storage/replica_raftstorage.go:804 [s2,r2/?:{-}] applying preemptive snapshot at index 15 (id=126ad496, encoded size=1731, 1 rocksdb batches, 5 log entries)
[08:42:53][Step 2/2] I181018 08:37:39.098569 42549 storage/replica_raftstorage.go:810 [s2,r2/?:{a-b}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:39.106667 41931 storage/replica_command.go:816 [s1,r2/1:{a-b}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:{a-b} [(n1,s1):1, next=2, gen=1]
[08:42:53][Step 2/2] I181018 08:37:39.117192 41931 storage/replica.go:3884 [s1,r2/1:{a-b},txn=1273271a] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:39.126328 41931 storage/store_snapshot.go:621 [s1,r2/1:{a-b}] sending preemptive snapshot 1f69cda5 at applied index 18
[08:42:53][Step 2/2] I181018 08:37:39.127223 41931 storage/store_snapshot.go:664 [s1,r2/1:{a-b}] streamed snapshot to (n3,s3):?: kv pairs: 9, log entries: 8, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:37:39.129082 42525 storage/replica_raftstorage.go:804 [s3,r2/?:{-}] applying preemptive snapshot at index 18 (id=1f69cda5, encoded size=2552, 1 rocksdb batches, 8 log entries)
[08:42:53][Step 2/2] I181018 08:37:39.132039 42525 storage/replica_raftstorage.go:810 [s3,r2/?:{a-b}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:39.135291 41931 storage/replica_command.go:816 [s1,r2/1:{a-b}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r2:{a-b} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
[08:42:53][Step 2/2] I181018 08:37:39.152186 41931 storage/replica.go:3884 [s1,r2/1:{a-b},txn=b4597db5] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] I181018 08:37:39.162073 41931 storage/store_snapshot.go:621 [s1,r2/1:{a-b}] sending preemptive snapshot a2726e02 at applied index 21
[08:42:53][Step 2/2] I181018 08:37:39.163622 41931 storage/store_snapshot.go:664 [s1,r2/1:{a-b}] streamed snapshot to (n4,s4):?: kv pairs: 10, log entries: 11, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:37:39.165159 42567 storage/replica_raftstorage.go:804 [s4,r2/?:{-}] applying preemptive snapshot at index 21 (id=a2726e02, encoded size=3403, 1 rocksdb batches, 11 log entries)
[08:42:53][Step 2/2] I181018 08:37:39.170635 42567 storage/replica_raftstorage.go:810 [s4,r2/?:{a-b}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=4ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:39.174091 41931 storage/replica_command.go:816 [s1,r2/1:{a-b}] change replicas (ADD_REPLICA (n4,s4):4): read existing descriptor r2:{a-b} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=1]
[08:42:53][Step 2/2] I181018 08:37:39.187983 41931 storage/replica.go:3884 [s1,r2/1:{a-b},txn=dd02425b] proposing ADD_REPLICA((n4,s4):4): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3 (n4,s4):4] next=5
[08:42:53][Step 2/2] I181018 08:37:39.252354 42328 storage/replica_proposal.go:212 [s3,r2/3:{a-b}] new range lease repl=(n3,s3):3 seq=2 start=0.000000123,380 epo=1 pro=0.000000123,381 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,6
[08:42:53][Step 2/2] I181018 08:37:39.313016 41931 storage/replica_command.go:816 [s3,r2/3:{a-b}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r2:{a-b} [(n1,s1):1, (n2,s2):2, (n3,s3):3, (n4,s4):4, next=5, gen=1]
[08:42:53][Step 2/2] I181018 08:37:39.328060 41931 storage/replica.go:3884 [s3,r2/3:{a-b},txn=33693e6b] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n4,s4):4 (n2,s2):2 (n3,s3):3] next=5
[08:42:53][Step 2/2] I181018 08:37:39.337205 42612 storage/store.go:3640 [s1,r2/1:{a-b}] added to replica GC queue (peer suggestion)
[08:42:53][Step 2/2] I181018 08:37:39.339781 42612 storage/store.go:3640 [s1,r2/1:{a-b}] added to replica GC queue (peer suggestion)
[08:42:53][Step 2/2] I181018 08:37:39.346299 42596 storage/store.go:2580 [replicaGC,s1,r2/1:{a-b}] removing replica r2/1
[08:42:53][Step 2/2] I181018 08:37:39.347980 42596 storage/replica.go:863 [replicaGC,s1,r2/1:{a-b}] removed 6 (0+6) keys in 1ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:39.351470 41931 storage/store_snapshot.go:621 [s1,r3/1:{b-/Max}] sending preemptive snapshot 270c08a1 at applied index 12
[08:42:53][Step 2/2] I181018 08:37:39.353086 41931 storage/store_snapshot.go:664 [s1,r3/1:{b-/Max}] streamed snapshot to (n2,s2):?: kv pairs: 42, log entries: 2, rate-limit: 2.0 MiB/sec, 5ms
[08:42:53][Step 2/2] I181018 08:37:39.356168 42587 storage/replica_raftstorage.go:804 [s2,r3/?:{-}] applying preemptive snapshot at index 12 (id=270c08a1, encoded size=7480, 1 rocksdb batches, 2 log entries)
[08:42:53][Step 2/2] I181018 08:37:39.358301 42587 storage/replica_raftstorage.go:810 [s2,r3/?:{b-/Max}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:39.362620 41931 storage/replica_command.go:816 [s1,r3/1:{b-/Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r3:{b-/Max} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:37:39.376113 41931 storage/replica.go:3884 [s1,r3/1:{b-/Max},txn=cbdb2596] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:39.385424 41931 storage/store_snapshot.go:621 [s1,r3/1:{b-/Max}] sending preemptive snapshot dc57f30f at applied index 14
[08:42:53][Step 2/2] I181018 08:37:39.387323 41931 storage/store_snapshot.go:664 [s1,r3/1:{b-/Max}] streamed snapshot to (n3,s3):?: kv pairs: 44, log entries: 4, rate-limit: 2.0 MiB/sec, 5ms
[08:42:53][Step 2/2] I181018 08:37:39.391836 42600 storage/replica_raftstorage.go:804 [s3,r3/?:{-}] applying preemptive snapshot at index 14 (id=dc57f30f, encoded size=8356, 1 rocksdb batches, 4 log entries)
[08:42:53][Step 2/2] I181018 08:37:39.394980 42600 storage/replica_raftstorage.go:810 [s3,r3/?:{b-/Max}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=1ms]
[08:42:53][Step 2/2] I181018 08:37:39.398540 41931 storage/replica_command.go:816 [s1,r3/1:{b-/Max}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r3:{b-/Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:37:39.423207 41931 storage/replica.go:3884 [s1,r3/1:{b-/Max},txn=a77b16e5] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] I181018 08:37:39.435712 41931 storage/store_snapshot.go:621 [s1,r3/1:{b-/Max}] sending preemptive snapshot 206ed594 at applied index 17
[08:42:53][Step 2/2] I181018 08:37:39.437546 41931 storage/store_snapshot.go:664 [s1,r3/1:{b-/Max}] streamed snapshot to (n4,s4):?: kv pairs: 45, log entries: 7, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:37:39.439646 42573 storage/replica_raftstorage.go:804 [s4,r3/?:{-}] applying preemptive snapshot at index 17 (id=206ed594, encoded size=9205, 1 rocksdb batches, 7 log entries)
[08:42:53][Step 2/2] I181018 08:37:39.444115 42573 storage/replica_raftstorage.go:810 [s4,r3/?:{b-/Max}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:39.447717 41931 storage/replica_command.go:816 [s1,r3/1:{b-/Max}] change replicas (ADD_REPLICA (n4,s4):4): read existing descriptor r3:{b-/Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:53][Step 2/2] I181018 08:37:39.459770 41931 storage/replica.go:3884 [s1,r3/1:{b-/Max},txn=895362a0] proposing ADD_REPLICA((n4,s4):4): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3 (n4,s4):4] next=5
[08:42:53][Step 2/2] I181018 08:37:39.501762 42457 storage/replica_proposal.go:212 [s4,r3/4:{b-/Max}] new range lease repl=(n4,s4):4 seq=2 start=0.000000123,644 epo=1 pro=0.000000123,645 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,6
[08:42:53][Step 2/2] I181018 08:37:39.620007 41931 storage/replica_command.go:816 [s4,r3/4:{b-/Max}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r3:{b-/Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, (n4,s4):4, next=5, gen=0]
[08:42:53][Step 2/2] I181018 08:37:39.645396 41931 storage/replica.go:3884 [s4,r3/4:{b-/Max},txn=274268fc] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n4,s4):4 (n2,s2):2 (n3,s3):3] next=5
[08:42:53][Step 2/2] I181018 08:37:39.659411 42529 storage/store.go:3640 [s1,r3/1:{b-/Max}] added to replica GC queue (peer suggestion)
[08:42:53][Step 2/2] I181018 08:37:39.659914 42529 storage/store.go:3640 [s1,r3/1:{b-/Max}] added to replica GC queue (peer suggestion)
[08:42:53][Step 2/2] I181018 08:37:39.660199 42529 storage/store.go:3640 [s1,r3/1:{b-/Max}] added to replica GC queue (peer suggestion)
[08:42:53][Step 2/2] I181018 08:37:39.663676 42551 storage/store.go:2580 [replicaGC,s1,r3/1:{b-/Max}] removing replica r3/1
[08:42:53][Step 2/2] I181018 08:37:39.665672 42551 storage/replica.go:863 [replicaGC,s1,r3/1:{b-/Max}] removed 42 (36+6) keys in 1ms [clear=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:39.725740 41931 storage/replica_command.go:432 [s3,r2/3:{a-b}] initiating a merge of r3:{b-/Max} [(n4,s4):4, (n2,s2):2, (n3,s3):3, next=5, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:39.784279 42328 storage/store.go:2580 [s3,r2/3:{a-b},txn=d1f35866] removing replica r3/3
[08:42:53][Step 2/2] I181018 08:37:39.791709 42240 storage/store.go:2580 [s2,r2/2:{a-b}] removing replica r3/2
[08:42:53][Step 2/2] I181018 08:37:39.793745 42497 storage/store.go:2580 [s4,r2/4:{a-b}] removing replica r3/4
[08:42:53][Step 2/2] I181018 08:37:39.827877 42242 storage/replica_proposal.go:212 [s2,r2/2:{a-/Max}] new range lease repl=(n2,s2):2 seq=3 start=0.000000165,764 epo=1 pro=0.000000165,765 following repl=(n3,s3):3 seq=2 start=0.000000123,380 epo=1 pro=0.000000123,381
[08:42:53][Step 2/2] W181018 08:37:39.900475 42527 storage/raft_transport.go:584 while processing outgoing Raft queue to node 4: EOF:
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeTimestampCacheCausality (1.16s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeLastRange
[08:42:53][Step 2/2] I181018 08:37:39.973128 42662 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:42321" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeLastRange (0.10s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeTxnFailure
[08:42:53][Step 2/2] I181018 08:37:40.068922 42552 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:42563" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:40.098262 42552 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:40.121200 42552 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:40.154530 42552 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:40.274080 42656 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] W181018 08:37:40.275371 42870 storage/intent_resolver.go:745 [s1] failed to cleanup transaction intents: could not GC completed transaction anchored at /Local/Range/Min/RangeDescriptor: node unavailable; try another peer
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeTxnFailure (0.29s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeStats
[08:42:53][Step 2/2] I181018 08:37:40.343989 42607 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:35973" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:40.364995 42607 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:41.084082 42607 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:41.114023 42995 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=6086541a] removing replica r2/1
[08:42:53][Step 2/2] I181018 08:37:41.117029 43052 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 3 [async] kv.TxnCoordSender: heartbeat loop
[08:42:53][Step 2/2] I181018 08:37:41.117505 43052 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 2 [async] kv.TxnCoordSender: heartbeat loop
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeStats (0.83s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeInFlightTxns
[08:42:53][Step 2/2] I181018 08:37:41.215107 43039 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:43673" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeInFlightTxns/valid
[08:42:53][Step 2/2] I181018 08:37:41.243102 43188 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:41.274493 43188 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:41.300772 43086 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=4f32a2d1] removing replica r2/1
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeInFlightTxns/abort-span
[08:42:53][Step 2/2] I181018 08:37:41.369396 43203 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r3]
[08:42:53][Step 2/2] I181018 08:37:41.420445 43203 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r3:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:41.454799 43107 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=53375fcf] removing replica r3/1
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeInFlightTxns/wait-queue
[08:42:53][Step 2/2] I181018 08:37:41.513860 43220 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r4]
[08:42:53][Step 2/2] I181018 08:37:41.569437 43220 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r4:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:41.594080 43114 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=76ba7b7c] removing replica r4/1
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeInFlightTxns (0.64s)
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeInFlightTxns/valid (0.13s)
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeInFlightTxns/abort-span (0.14s)
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeInFlightTxns/wait-queue (0.21s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeSplitRace_MergeWins
[08:42:53][Step 2/2] I181018 08:37:41.839038 42940 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41781" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:41.857186 42940 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:41.875963 42940 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:41.884361 43066 storage/replica_command.go:300 [s1,r2/1:{b-/Max}] initiating a split of this range at key "b\x00" [r3]
[08:42:53][Step 2/2] I181018 08:37:42.079244 43340 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] W181018 08:37:42.080450 42909 storage/intent_resolver.go:745 [s1] failed to cleanup transaction intents: failed to resolve intents: result is ambiguous (server shutdown)
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeSplitRace_MergeWins (0.38s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeSplitRace_SplitWins
[08:42:53][Step 2/2] I181018 08:37:42.235311 43229 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:45701" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:42.256293 43229 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:42.280954 43229 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:42.283359 43229 storage/replica_command.go:300 [s1,r2/1:{b-/Max}] initiating a split of this range at key "c" [r3]
[08:42:53][Step 2/2] I181018 08:37:42.349568 43215 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] W181018 08:37:42.351318 43507 storage/intent_resolver.go:745 [s1] failed to cleanup transaction intents: failed to resolve intents: result is ambiguous (server shutdown)
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeSplitRace_SplitWins (0.21s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeRHSLeaseExpiration
[08:42:53][Step 2/2] I181018 08:37:42.419089 43460 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:35741" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:37:42.466228 43460 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:42.467276 43460 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:43529" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:42.468720 43371 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:35741
[08:42:53][Step 2/2] I181018 08:37:42.539397 43460 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:42.563190 43460 storage/store_snapshot.go:621 [s1,r1/1:{/Min-b}] sending preemptive snapshot c2d6973c at applied index 18
[08:42:53][Step 2/2] I181018 08:37:42.564496 43460 storage/store_snapshot.go:664 [s1,r1/1:{/Min-b}] streamed snapshot to (n2,s2):?: kv pairs: 17, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:37:42.565707 43716 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 18 (id=c2d6973c, encoded size=2673, 1 rocksdb batches, 8 log entries)
[08:42:53][Step 2/2] I181018 08:37:42.568617 43716 storage/replica_raftstorage.go:810 [s2,r1/?:{/Min-b}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:42.571243 43460 storage/replica_command.go:816 [s1,r1/1:{/Min-b}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:{/Min-b} [(n1,s1):1, next=2, gen=1]
[08:42:53][Step 2/2] I181018 08:37:42.578515 43460 storage/replica.go:3884 [s1,r1/1:{/Min-b},txn=5e1d3fed] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:42.728237 43460 storage/store_snapshot.go:621 [s1,r2/1:{b-/Max}] sending preemptive snapshot 1cbb93c2 at applied index 12
[08:42:53][Step 2/2] I181018 08:37:42.729422 43460 storage/store_snapshot.go:664 [s1,r2/1:{b-/Max}] streamed snapshot to (n2,s2):?: kv pairs: 42, log entries: 2, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:37:42.731232 43722 storage/replica_raftstorage.go:804 [s2,r2/?:{-}] applying preemptive snapshot at index 12 (id=1cbb93c2, encoded size=7480, 1 rocksdb batches, 2 log entries)
[08:42:53][Step 2/2] I181018 08:37:42.733376 43722 storage/replica_raftstorage.go:810 [s2,r2/?:{b-/Max}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:42.735880 43460 storage/replica_command.go:816 [s1,r2/1:{b-/Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:{b-/Max} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:37:42.748756 43460 storage/replica.go:3884 [s1,r2/1:{b-/Max},txn=bb9490bb] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:42.823477 43628 storage/replica_proposal.go:212 [s2,r2/2:{b-/Max}] new range lease repl=(n2,s2):2 seq=2 start=0.000000123,277 epo=1 pro=0.000000123,278 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,6
[08:42:53][Step 2/2] I181018 08:37:42.856313 43726 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:42.896263 43460 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:53][Step 2/2] I181018 08:37:42.918950 42892 storage/client_merge_test.go:1294 starting get 0
[08:42:53][Step 2/2] I181018 08:37:42.920169 43711 storage/client_merge_test.go:1294 starting get 1
[08:42:53][Step 2/2] I181018 08:37:42.921483 43712 storage/client_merge_test.go:1294 starting get 2
[08:42:53][Step 2/2] I181018 08:37:42.922714 43489 storage/client_merge_test.go:1294 starting get 3
[08:42:53][Step 2/2] I181018 08:37:42.923946 43517 storage/client_merge_test.go:1294 starting get 4
[08:42:53][Step 2/2] I181018 08:37:42.925211 43727 storage/client_merge_test.go:1294 starting get 5
[08:42:53][Step 2/2] I181018 08:37:42.926518 43728 storage/client_merge_test.go:1294 starting get 6
[08:42:53][Step 2/2] I181018 08:37:42.927788 42894 storage/client_merge_test.go:1294 starting get 7
[08:42:53][Step 2/2] I181018 08:37:42.930047 43518 storage/client_merge_test.go:1294 starting get 8
[08:42:53][Step 2/2] I181018 08:37:42.931216 42893 storage/node_liveness.go:729 [s1,r2/1:{b-/Max}] incremented n2 liveness epoch to 2
[08:42:53][Step 2/2] I181018 08:37:42.931304 43729 storage/client_merge_test.go:1294 starting get 9
[08:42:53][Step 2/2] I181018 08:37:42.939885 43469 storage/replica_proposal.go:212 [s1,r2/1:{b-/Max}] new range lease repl=(n1,s1):1 seq=3 start=1.800000125,40 epo=1 pro=1.800000125,41 following repl=(n2,s2):2 seq=2 start=0.000000123,277 epo=1 pro=0.000000123,278
[08:42:53][Step 2/2] I181018 08:37:43.007179 43524 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=ab3b07cc] removing replica r2/1
[08:42:53][Step 2/2] I181018 08:37:43.008129 43663 storage/store.go:2580 [s2,r1/2:{/Min-b}] removing replica r2/2
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeRHSLeaseExpiration (0.68s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeConcurrentRequests
[08:42:53][Step 2/2] I181018 08:37:43.117547 43750 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:35041" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:43.166368 43750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:43.215325 43750 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:43.271775 43784 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=242e5f00] removing replica r2/1
[08:42:53][Step 2/2] I181018 08:37:43.277484 43750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r3]
[08:42:53][Step 2/2] I181018 08:37:43.340767 43750 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r3:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:43.391232 43794 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=72314cf4] removing replica r3/1
[08:42:53][Step 2/2] I181018 08:37:43.394762 43750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r4]
[08:42:53][Step 2/2] I181018 08:37:43.418503 43750 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r4:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:43.442144 43796 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=a6be7aa1] removing replica r4/1
[08:42:53][Step 2/2] I181018 08:37:43.444282 43750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r5]
[08:42:53][Step 2/2] I181018 08:37:43.472606 43750 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r5:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:43.545527 43804 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=4def9291] removing replica r5/1
[08:42:53][Step 2/2] I181018 08:37:43.550534 43750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r6]
[08:42:53][Step 2/2] I181018 08:37:43.600348 43750 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r6:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:43.726900 43816 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=07b902bf] removing replica r6/1
[08:42:53][Step 2/2] I181018 08:37:43.732529 43750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r7]
[08:42:53][Step 2/2] I181018 08:37:43.759944 43750 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r7:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] W181018 08:37:43.779991 43974 storage/intent_resolver.go:675 [s1] failed to push during intent resolution: failed to push "merge" id=af06de79 key=/Local/Range/Min/RangeDescriptor rw=true pri=0.03898758 iso=SERIALIZABLE stat=PENDING epo=0 ts=0.000000123,3442 orig=0.000000123,3442 max=0.000000123,3442 wto=false rop=false seq=1
[08:42:53][Step 2/2] I181018 08:37:43.793907 43823 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=af06de79] removing replica r7/1
[08:42:53][Step 2/2] I181018 08:37:43.796471 43750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r8]
[08:42:53][Step 2/2] I181018 08:37:43.851039 43750 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r8:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:43.893912 43767 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=181214a4] removing replica r8/1
[08:42:53][Step 2/2] I181018 08:37:43.896784 43750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r9]
[08:42:53][Step 2/2] I181018 08:37:43.928451 43750 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r9:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:43.997804 43773 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=0f0b5a26] removing replica r9/1
[08:42:53][Step 2/2] I181018 08:37:44.000288 43750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r10]
[08:42:53][Step 2/2] I181018 08:37:44.020254 43750 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r10:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:44.046039 43789 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=218c6595] removing replica r10/1
[08:42:53][Step 2/2] I181018 08:37:44.049184 43750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r11]
[08:42:53][Step 2/2] I181018 08:37:44.093392 43750 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r11:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:44.154004 43795 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=20a659b3] removing replica r11/1
[08:42:53][Step 2/2] I181018 08:37:44.156654 43750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r12]
[08:42:53][Step 2/2] I181018 08:37:44.182601 43750 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r12:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:44.260867 43803 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=dfa1c638] removing replica r12/1
[08:42:53][Step 2/2] I181018 08:37:44.265193 43750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r13]
[08:42:53][Step 2/2] I181018 08:37:44.305130 43750 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r13:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:44.371980 43812 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=c90bb61b] removing replica r13/1
[08:42:53][Step 2/2] I181018 08:37:44.374771 43750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r14]
[08:42:53][Step 2/2] I181018 08:37:44.409859 43750 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r14:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:44.442171 43825 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=0a76403c] removing replica r14/1
[08:42:53][Step 2/2] I181018 08:37:44.449141 43750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r15]
[08:42:53][Step 2/2] I181018 08:37:44.477752 43750 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r15:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:44.529411 43777 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=8bb3d1a7] removing replica r15/1
[08:42:53][Step 2/2] I181018 08:37:44.531685 43750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r16]
[08:42:53][Step 2/2] I181018 08:37:44.562490 43750 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r16:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:44.627828 43782 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=4babf2ce] removing replica r16/1
[08:42:53][Step 2/2] I181018 08:37:44.630054 43750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r17]
[08:42:53][Step 2/2] I181018 08:37:44.649996 43750 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r17:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:44.676420 43794 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=d5fcc170] removing replica r17/1
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeConcurrentRequests (1.85s)
[08:42:53][Step 2/2] === RUN TestStoreReplicaGCAfterMerge
[08:42:53][Step 2/2] I181018 08:37:44.948695 43993 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:35743" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:37:44.997801 43993 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:44.998504 43993 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:46825" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:45.004509 44174 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:35743
[08:42:53][Step 2/2] I181018 08:37:45.072058 43993 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 96d028fc at applied index 15
[08:42:53][Step 2/2] I181018 08:37:45.073363 43993 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 48, log entries: 5, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:37:45.074742 43970 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 15 (id=96d028fc, encoded size=8165, 1 rocksdb batches, 5 log entries)
[08:42:53][Step 2/2] I181018 08:37:45.077309 43970 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:45.079989 43993 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:37:45.087866 43993 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=fd4307c1] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:45.380710 43993 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:45.410077 43993 storage/replica_command.go:816 [s1,r1/1:{/Min-b}] change replicas (REMOVE_REPLICA (n2,s2):2): read existing descriptor r1:{/Min-b} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
[08:42:53][Step 2/2] I181018 08:37:45.427134 43993 storage/replica.go:3884 [s1,r1/1:{/Min-b},txn=c27bef08] proposing REMOVE_REPLICA((n2,s2):2): updated=[(n1,s1):1] next=3
[08:42:53][Step 2/2] E181018 08:37:45.439658 44203 storage/replica_proposal.go:721 [s2,r1/2:{/Min-b}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:45.440419 44311 storage/store.go:3638 [s2,r1/2:{/Min-b}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:45.442210 44311 storage/store.go:3638 [s2,r1/2:{/Min-b}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] I181018 08:37:45.445263 43993 storage/replica_command.go:816 [s1,r2/1:{b-/Max}] change replicas (REMOVE_REPLICA (n2,s2):2): read existing descriptor r2:{b-/Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] W181018 08:37:45.492865 44319 storage/intent_resolver.go:675 [s1] failed to push during intent resolution: failed to push "change-replica" id=f6e7cf98 key=/Local/Range"b"/RangeDescriptor rw=true pri=0.00272421 iso=SERIALIZABLE stat=PENDING epo=0 ts=0.000000123,276 orig=0.000000123,276 max=0.000000123,277 wto=false rop=false seq=2
[08:42:53][Step 2/2] I181018 08:37:45.497435 43993 storage/replica.go:3884 [s1,r2/1:{b-/Max},txn=f6e7cf98] proposing REMOVE_REPLICA((n2,s2):2): updated=[(n1,s1):1] next=3
[08:42:53][Step 2/2] E181018 08:37:45.503895 44212 storage/replica_proposal.go:721 [s2,r2/2:{b-/Max}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] I181018 08:37:45.506029 43993 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, next=3, gen=0] into this range
[08:42:53][Step 2/2] E181018 08:37:45.506602 44311 storage/store.go:3638 [s2,r2/2:{b-/Max}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] I181018 08:37:45.538068 44130 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=6bb043a6] removing replica r2/1
[08:42:53][Step 2/2] I181018 08:37:45.543937 43993 storage/store.go:2580 [s2,r1/2:{/Min-b}] removing replica r1/2
[08:42:53][Step 2/2] I181018 08:37:45.545521 43993 storage/replica.go:863 [s2,r1/2:{/Min-b}] removed 13 (8+5) keys in 1ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:45.547514 43993 storage/store.go:2580 [s2,r2/2:{b-/Max}] removing replica r2/2
[08:42:53][Step 2/2] I181018 08:37:45.548807 43993 storage/replica.go:863 [s2,r2/2:{b-/Max}] removed 42 (36+6) keys in 1ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] --- PASS: TestStoreReplicaGCAfterMerge (0.69s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeAddReplicaRace
[08:42:53][Step 2/2] I181018 08:37:45.664950 44355 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:34687" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:37:45.713139 44355 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:45.714137 44355 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45173" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:45.715470 44061 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:34687
[08:42:53][Step 2/2] I181018 08:37:45.734757 44355 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:45.764466 44306 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:45.792386 44402 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=51fdc933] removing replica r2/1
[08:42:53][Step 2/2] I181018 08:37:45.795514 44306 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r3]
[08:42:53][Step 2/2] W181018 08:37:45.871480 44586 storage/intent_resolver.go:675 [s1] failed to push during intent resolution: failed to push "split" id=8969bab7 key=/Local/Range/Min/RangeDescriptor rw=true pri=0.06529451 iso=SERIALIZABLE stat=PENDING epo=0 ts=0.000000123,213 orig=0.000000123,213 max=0.000000123,214 wto=false rop=false seq=1
[08:42:53][Step 2/2] I181018 08:37:45.895269 44355 storage/store_snapshot.go:621 [s1,r1/1:{/Min-b}] sending preemptive snapshot 3d8c8f37 at applied index 18
[08:42:53][Step 2/2] I181018 08:37:45.896641 44355 storage/store_snapshot.go:664 [s1,r1/1:{/Min-b}] streamed snapshot to (n2,s2):?: kv pairs: 17, log entries: 8, rate-limit: 2.0 MiB/sec, 136ms
[08:42:53][Step 2/2] I181018 08:37:45.898158 44306 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 18 (id=3d8c8f37, encoded size=2673, 1 rocksdb batches, 8 log entries)
[08:42:53][Step 2/2] I181018 08:37:45.902016 44306 storage/replica_raftstorage.go:810 [s2,r1/?:{/Min-b}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:45.905432 44355 storage/replica_command.go:816 [s1,r1/1:{/Min-b}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:{/Min-b} [(n1,s1):1, next=2, gen=3]
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeAddReplicaRace (0.35s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeSlowUnabandonedFollower
[08:42:53][Step 2/2] I181018 08:37:45.996752 44334 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:43993" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:37:46.041133 44334 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:46.042377 44334 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:33747" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:46.044008 44800 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:43993
[08:42:53][Step 2/2] W181018 08:37:46.146517 44334 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:46.147467 44334 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:40957" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:46.149360 44916 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:43993
[08:42:53][Step 2/2] I181018 08:37:46.168651 44334 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 517bf991 at applied index 16
[08:42:53][Step 2/2] I181018 08:37:46.170071 44334 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:37:46.171795 44817 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=517bf991, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:53][Step 2/2] I181018 08:37:46.175235 44817 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:46.178508 44334 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:37:46.186542 44334 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=cad89b44] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:46.206111 44334 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 87d7ac19 at applied index 18
[08:42:53][Step 2/2] I181018 08:37:46.207604 44334 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 5ms
[08:42:53][Step 2/2] I181018 08:37:46.209214 44907 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=87d7ac19, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:53][Step 2/2] I181018 08:37:46.213221 44907 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:46.216782 44334 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:37:46.240382 44334 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=343e452d] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] I181018 08:37:46.531685 44334 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:46.564460 44334 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0] into this range
[08:42:53][Step 2/2] W181018 08:37:46.618616 44720 storage/intent_resolver.go:675 [s1] failed to push during intent resolution: failed to push "merge" id=1931ff0c key=/Local/Range/Min/RangeDescriptor rw=true pri=0.07419935 iso=SERIALIZABLE stat=PENDING epo=0 ts=0.000000123,367 orig=0.000000123,367 max=0.000000123,367 wto=false rop=false seq=1
[08:42:53][Step 2/2] I181018 08:37:46.635953 44752 storage/store.go:2580 [s2,r1/2:{/Min-b}] removing replica r2/2
[08:42:53][Step 2/2] I181018 08:37:46.636035 44598 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=1931ff0c] removing replica r2/1
[08:42:53][Step 2/2] I181018 08:37:46.645333 44843 storage/store.go:2580 [s3,r1/3:{/Min-b}] removing replica r2/3
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeSlowUnabandonedFollower (0.77s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeSlowAbandonedFollower
[08:42:53][Step 2/2] I181018 08:37:46.773041 44482 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:45187" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:37:46.825034 44482 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:46.825972 44482 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:32839" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:46.829288 45085 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:45187
[08:42:53][Step 2/2] W181018 08:37:46.902716 44482 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:46.903616 44482 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:32827" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:46.905429 45315 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:45187
[08:42:53][Step 2/2] I181018 08:37:46.928600 44482 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 677c8139 at applied index 16
[08:42:53][Step 2/2] I181018 08:37:46.930288 44482 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 5ms
[08:42:53][Step 2/2] I181018 08:37:46.934258 45319 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=677c8139, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:53][Step 2/2] I181018 08:37:46.937656 45319 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:46.942637 44482 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:37:46.951888 44482 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=896db8fc] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:46.963674 44482 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot f9175c37 at applied index 18
[08:42:53][Step 2/2] I181018 08:37:46.965233 44482 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:37:46.967382 45348 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=f9175c37, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:53][Step 2/2] I181018 08:37:46.971296 45348 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:46.976333 44482 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:37:47.004262 44482 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=19f560c7] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] I181018 08:37:47.182694 44482 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:47.242241 44482 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0] into this range
[08:42:53][Step 2/2] W181018 08:37:47.291869 45353 storage/intent_resolver.go:675 [s1] failed to push during intent resolution: failed to push "merge" id=e34429d9 key=/Local/Range/Min/RangeDescriptor rw=true pri=0.00207834 iso=SERIALIZABLE stat=PENDING epo=0 ts=0.000000123,315 orig=0.000000123,315 max=0.000000123,315 wto=false rop=false seq=1
[08:42:53][Step 2/2] I181018 08:37:47.329603 45153 storage/store.go:2580 [s2,r1/2:{/Min-b}] removing replica r2/2
[08:42:53][Step 2/2] I181018 08:37:47.330654 45075 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=e34429d9] removing replica r2/1
[08:42:53][Step 2/2] I181018 08:37:47.335478 44482 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=2]
[08:42:53][Step 2/2] I181018 08:37:47.353297 44482 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=0cfcbe77] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2] next=4
[08:42:53][Step 2/2] E181018 08:37:47.373449 45364 storage/store.go:3638 [s3,r1/3:{/Min-b}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] I181018 08:37:47.378743 45258 storage/store.go:2580 [s3,r1/3:{/Min-b}] removing replica r2/3
[08:42:53][Step 2/2] E181018 08:37:47.380291 45364 storage/store.go:3638 [s3,r1/3:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:47.380814 45364 storage/store.go:3638 [s3,r1/3:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:47.381267 45258 storage/replica_proposal.go:721 [s3,r1/3:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:47.381849 45364 storage/store.go:3638 [s3,r1/3:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:47.382207 45364 storage/store.go:3638 [s3,r1/3:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:47.382464 45364 storage/store.go:3638 [s3,r1/3:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:47.382654 45364 storage/store.go:3638 [s3,r1/3:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:47.382827 45364 storage/store.go:3638 [s3,r1/3:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:47.394843 45364 storage/store.go:3638 [s3,r1/3:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeSlowAbandonedFollower (0.74s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeAbandonedFollowers
[08:42:53][Step 2/2] I181018 08:37:47.513115 45395 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:37949" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:37:47.558754 45395 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:47.559638 45395 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:44601" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:47.561310 45592 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:37949
[08:42:53][Step 2/2] W181018 08:37:47.662965 45395 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:47.664086 45395 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:40801" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:47.665727 45722 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:37949
[08:42:53][Step 2/2] I181018 08:37:47.688083 45395 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot d869f0f7 at applied index 16
[08:42:53][Step 2/2] I181018 08:37:47.689602 45395 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:37:47.695934 45725 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=d869f0f7, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:53][Step 2/2] I181018 08:37:47.699914 45725 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:47.704017 45395 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:37:47.715942 45395 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=298b353b] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:47.727114 45395 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 2fd5cfc3 at applied index 18
[08:42:53][Step 2/2] I181018 08:37:47.729444 45395 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 5ms
[08:42:53][Step 2/2] I181018 08:37:47.730679 45330 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=2fd5cfc3, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:53][Step 2/2] I181018 08:37:47.733388 45330 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:47.736367 45395 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:37:47.755116 45395 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b6fb3c8a] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] I181018 08:37:47.927796 45395 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r2]
[08:42:53][Step 2/2] I181018 08:37:47.982385 45395 storage/replica_command.go:300 [s1,r2/1:{a-/Max}] initiating a split of this range at key "b" [r3]
[08:42:53][Step 2/2] I181018 08:37:48.060126 45395 storage/replica_command.go:300 [s1,r3/1:{b-/Max}] initiating a split of this range at key "c" [r4]
[08:42:53][Step 2/2] I181018 08:37:48.139943 45395 storage/replica_command.go:816 [s1,r2/1:{a-b}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r2:{a-b} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=1]
[08:42:53][Step 2/2] I181018 08:37:48.171201 45395 storage/replica.go:3884 [s1,r2/1:{a-b},txn=86c9a680] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2] next=4
[08:42:53][Step 2/2] E181018 08:37:48.177322 45699 storage/replica_proposal.go:721 [s3,r2/3:{a-b}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] I181018 08:37:48.181199 45395 storage/replica_command.go:816 [s1,r3/1:{b-c}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r3:{b-c} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=1]
[08:42:53][Step 2/2] E181018 08:37:48.183503 45506 storage/store.go:3638 [s3,r2/3:{a-b}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:48.184051 45506 storage/store.go:3638 [s3,r2/3:{a-b}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:48.206773 45506 storage/store.go:3638 [s3,r2/3:{a-b}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] I181018 08:37:48.211168 45395 storage/replica.go:3884 [s1,r3/1:{b-c},txn=afe65f98] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2] next=4
[08:42:53][Step 2/2] E181018 08:37:48.217151 45646 storage/replica_proposal.go:721 [s3,r3/3:{b-c}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:48.218679 45506 storage/store.go:3638 [s3,r3/3:{b-c}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] I181018 08:37:48.223736 45395 storage/replica_command.go:816 [s1,r4/1:{c-/Max}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r4:{c-/Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:53][Step 2/2] I181018 08:37:48.263098 45395 storage/replica.go:3884 [s1,r4/1:{c-/Max},txn=c94800fe] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2] next=4
[08:42:53][Step 2/2] E181018 08:37:48.270225 45664 storage/replica_proposal.go:721 [s3,r4/3:{c-/Max}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:48.270718 45506 storage/store.go:3638 [s3,r4/3:{c-/Max}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] I181018 08:37:48.272002 45395 storage/replica_command.go:432 [s1,r2/1:{a-b}] initiating a merge of r3:{b-c} [(n1,s1):1, (n2,s2):2, next=4, gen=1] into this range
[08:42:53][Step 2/2] I181018 08:37:48.320955 45401 storage/store.go:2580 [s1,r2/1:{a-b},txn=48adad00] removing replica r3/1
[08:42:53][Step 2/2] I181018 08:37:48.322232 45557 storage/store.go:2580 [s2,r2/2:{a-b}] removing replica r3/2
[08:42:53][Step 2/2] I181018 08:37:48.325953 45395 storage/replica_command.go:432 [s1,r2/1:{a-c}] initiating a merge of r4:{c-/Max} [(n1,s1):1, (n2,s2):2, next=4, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:48.377851 45423 storage/store.go:2580 [s1,r2/1:{a-c},txn=bf43b1c2] removing replica r4/1
[08:42:53][Step 2/2] I181018 08:37:48.378464 45388 storage/store.go:2580 [s2,r2/2:{a-c}] removing replica r4/2
[08:42:53][Step 2/2] I181018 08:37:48.396891 45395 storage/store.go:2580 [s3,r2/3:{a-b}] removing replica r2/3
[08:42:53][Step 2/2] I181018 08:37:48.398158 45395 storage/replica.go:863 [s3,r2/3:{a-b}] removed 6 (0+6) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:48.403541 45395 storage/store.go:2580 [s3,r3/3:{b-c}] removing replica r3/3
[08:42:53][Step 2/2] I181018 08:37:48.404764 45395 storage/replica.go:863 [s3,r3/3:{b-c}] removed 6 (0+6) keys in 0ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:48.407134 45395 storage/store.go:2580 [s3,r4/3:{c-/Max}] removing replica r4/3
[08:42:53][Step 2/2] I181018 08:37:48.408404 45395 storage/replica.go:863 [s3,r4/3:{c-/Max}] removed 42 (36+6) keys in 1ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] W181018 08:37:48.437663 45727 storage/raft_transport.go:584 while processing outgoing Raft queue to node 2: EOF:
[08:42:53][Step 2/2] W181018 08:37:48.438220 45748 storage/raft_transport.go:584 while processing outgoing Raft queue to node 3: rpc error: code = Unavailable desc = transport is closing:
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeAbandonedFollowers (1.00s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeDeadFollower
[08:42:53][Step 2/2] I181018 08:37:48.506745 45775 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:37461" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:37:48.559422 45775 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:48.560478 45775 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45065" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:48.563387 46036 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:37461
[08:42:53][Step 2/2] W181018 08:37:48.610379 45775 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:48.611505 45775 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:36145" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:48.614968 46139 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:37461
[08:42:53][Step 2/2] I181018 08:37:48.640968 45775 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 820f458d at applied index 16
[08:42:53][Step 2/2] I181018 08:37:48.642223 45775 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:37:48.643902 46180 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=820f458d, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:53][Step 2/2] I181018 08:37:48.646384 46180 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:48.657486 45775 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:37:48.665448 45775 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=6f63f2b9] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:48.675388 45775 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot d825c289 at applied index 18
[08:42:53][Step 2/2] I181018 08:37:48.676561 45775 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:37:48.677822 45816 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=d825c289, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:53][Step 2/2] I181018 08:37:48.680736 45816 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:48.683144 45775 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:37:48.698831 45775 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=c35324ac] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] I181018 08:37:48.992438 45775 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:49.067476 45775 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:49.134353 46195 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] 1 [async] wait-for-merge
[08:42:53][Step 2/2] I181018 08:37:49.135284 46195 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] W181018 08:37:49.137173 46049 storage/intent_resolver.go:745 [s1] failed to cleanup transaction intents: failed to resolve intents: failed to send RPC: sending to all 3 replicas failed; last error: <nil> failed to send RPC: store is stopped
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeDeadFollower (0.72s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeReadoptedBothFollowers
[08:42:53][Step 2/2] I181018 08:37:49.226468 45921 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:37401" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:37:49.274753 45921 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:49.275694 45921 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:33759" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:49.278722 46193 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:37401
[08:42:53][Step 2/2] W181018 08:37:49.325324 45921 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:49.326104 45921 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:40427" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:49.327750 46563 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:37401
[08:42:53][Step 2/2] I181018 08:37:49.397628 45921 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot fa94d009 at applied index 16
[08:42:53][Step 2/2] I181018 08:37:49.398944 45921 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:37:49.400181 46471 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=fa94d009, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:53][Step 2/2] I181018 08:37:49.403507 46471 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:49.406639 45921 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:37:49.415660 45921 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=67ea24b3] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:49.426826 45921 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 3a3062cd at applied index 18
[08:42:53][Step 2/2] I181018 08:37:49.428158 45921 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:37:49.429939 46581 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=3a3062cd, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:53][Step 2/2] I181018 08:37:49.433901 46581 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:49.436952 45921 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:37:49.458487 45921 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=d715bda2] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] I181018 08:37:49.754284 45921 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:49.828538 45921 storage/replica_command.go:816 [s1,r1/1:{/Min-b}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r1:{/Min-b} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=1]
[08:42:53][Step 2/2] I181018 08:37:49.839919 45921 storage/replica.go:3884 [s1,r1/1:{/Min-b},txn=a1122c29] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2] next=4
[08:42:53][Step 2/2] E181018 08:37:49.847469 46464 storage/replica_proposal.go:721 [s3,r1/3:{/Min-b}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:49.849470 46400 storage/store.go:3638 [s3,r1/3:{/Min-b}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] I181018 08:37:49.852014 45921 storage/replica_command.go:816 [s1,r2/1:{b-/Max}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r2:{b-/Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:53][Step 2/2] E181018 08:37:49.868879 46400 storage/store.go:3638 [s3,r1/3:{/Min-b}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] I181018 08:37:49.869278 45921 storage/replica.go:3884 [s1,r2/1:{b-/Max},txn=7be75f9f] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2] next=4
[08:42:53][Step 2/2] I181018 08:37:49.876909 45921 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, (n2,s2):2, next=4, gen=0] into this range
[08:42:53][Step 2/2] E181018 08:37:49.877806 46497 storage/replica_proposal.go:721 [s3,r2/3:{b-/Max}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:49.884833 46400 storage/store.go:3638 [s3,r2/3:{b-/Max}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:49.919283 46400 storage/store.go:3638 [s3,r2/3:{b-/Max}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] I181018 08:37:49.932210 46256 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=5e2611a4] removing replica r2/1
[08:42:53][Step 2/2] I181018 08:37:49.933347 46362 storage/store.go:2580 [s2,r1/2:{/Min-b}] removing replica r2/2
[08:42:53][Step 2/2] I181018 08:37:49.945245 45921 storage/store.go:2580 [s3,r1/3:{/Min-b}] removing replica r1/3
[08:42:53][Step 2/2] I181018 08:37:49.947206 45921 storage/replica.go:863 [s3,r1/3:{/Min-b}] removed 15 (10+5) keys in 1ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:49.953626 45921 storage/store.go:2580 [s3,r2/3:{b-/Max}] removing replica r2/3
[08:42:53][Step 2/2] I181018 08:37:49.954931 45921 storage/replica.go:863 [s3,r2/3:{b-/Max}] removed 43 (37+6) keys in 1ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:49.958403 45921 storage/store_snapshot.go:621 sending preemptive snapshot 667f57e6 at applied index 35
[08:42:53][Step 2/2] I181018 08:37:49.960052 45921 storage/store_snapshot.go:664 streamed snapshot to (n3,s3):?: kv pairs: 77, log entries: 25, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:37:49.961181 46412 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 35 (id=667f57e6, encoded size=15892, 1 rocksdb batches, 25 log entries)
[08:42:53][Step 2/2] I181018 08:37:49.968375 46412 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=6ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:49.970835 45921 storage/replica_command.go:816 change replicas (ADD_REPLICA (n3,s3):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=4, gen=2]
[08:42:53][Step 2/2] I181018 08:37:49.981955 45921 storage/replica.go:3884 [txn=85a7add4,s1,r1/1:/M{in-ax}] proposing ADD_REPLICA((n3,s3):4): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):4] next=5
[08:42:53][Step 2/2] W181018 08:37:50.038031 46569 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: rpc error: code = Canceled desc = grpc: the client connection is closing:
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeReadoptedBothFollowers (0.89s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeReadoptedLHSFollower
[08:42:53][Step 2/2] I181018 08:37:50.123470 46588 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:43367" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:37:50.170178 46588 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:50.171198 46588 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:44937" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:50.172805 46823 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:43367
[08:42:53][Step 2/2] W181018 08:37:50.220445 46588 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:50.221470 46588 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:46713" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:50.222940 46941 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:43367
[08:42:53][Step 2/2] I181018 08:37:50.240117 46588 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:50.259243 46588 storage/store_snapshot.go:621 [s1,r1/1:{/Min-b}] sending preemptive snapshot 18a5c1ec at applied index 19
[08:42:53][Step 2/2] I181018 08:37:50.260270 46588 storage/store_snapshot.go:664 [s1,r1/1:{/Min-b}] streamed snapshot to (n2,s2):?: kv pairs: 18, log entries: 9, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:37:50.261221 46722 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 19 (id=18a5c1ec, encoded size=2864, 1 rocksdb batches, 9 log entries)
[08:42:53][Step 2/2] I181018 08:37:50.264095 46722 storage/replica_raftstorage.go:810 [s2,r1/?:{/Min-b}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:50.266907 46588 storage/replica_command.go:816 [s1,r1/1:{/Min-b}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:{/Min-b} [(n1,s1):1, next=2, gen=1]
[08:42:53][Step 2/2] I181018 08:37:50.275816 46588 storage/replica.go:3884 [s1,r1/1:{/Min-b},txn=b7416d52] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:50.438950 46588 storage/store_snapshot.go:621 [s1,r2/1:{b-/Max}] sending preemptive snapshot 0df34465 at applied index 12
[08:42:53][Step 2/2] I181018 08:37:50.440266 46588 storage/store_snapshot.go:664 [s1,r2/1:{b-/Max}] streamed snapshot to (n2,s2):?: kv pairs: 42, log entries: 2, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:37:50.442301 46954 storage/replica_raftstorage.go:804 [s2,r2/?:{-}] applying preemptive snapshot at index 12 (id=0df34465, encoded size=7480, 1 rocksdb batches, 2 log entries)
[08:42:53][Step 2/2] I181018 08:37:50.444285 46954 storage/replica_raftstorage.go:810 [s2,r2/?:{b-/Max}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:50.447387 46588 storage/replica_command.go:816 [s1,r2/1:{b-/Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:{b-/Max} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:37:50.461093 46588 storage/replica.go:3884 [s1,r2/1:{b-/Max},txn=5424eaf3] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:50.489527 46588 storage/store_snapshot.go:621 [s1,r1/1:{/Min-b}] sending preemptive snapshot 329ad1b6 at applied index 23
[08:42:53][Step 2/2] I181018 08:37:50.490714 46588 storage/store_snapshot.go:664 [s1,r1/1:{/Min-b}] streamed snapshot to (n3,s3):?: kv pairs: 22, log entries: 13, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:37:50.492904 46996 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 23 (id=329ad1b6, encoded size=4138, 1 rocksdb batches, 13 log entries)
[08:42:53][Step 2/2] I181018 08:37:50.497374 46996 storage/replica_raftstorage.go:810 [s3,r1/?:{/Min-b}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:50.500752 46588 storage/replica_command.go:816 [s1,r1/1:{/Min-b}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:{/Min-b} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
[08:42:53][Step 2/2] I181018 08:37:50.512365 46588 storage/replica.go:3884 [s1,r1/1:{/Min-b},txn=a76c119d] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] I181018 08:37:50.660489 46588 storage/replica_command.go:816 [s1,r1/1:{/Min-b}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r1:{/Min-b} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=1]
[08:42:53][Step 2/2] I181018 08:37:50.674078 46588 storage/replica.go:3884 [s1,r1/1:{/Min-b},txn=c816cfe2] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2] next=4
[08:42:53][Step 2/2] E181018 08:37:50.681159 46869 storage/replica_proposal.go:721 [s3,r1/3:{/Min-b}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] I181018 08:37:50.683443 46588 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0] into this range
[08:42:53][Step 2/2] E181018 08:37:50.683956 46864 storage/store.go:3638 [s3,r1/3:{/Min-b}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:50.691827 46864 storage/store.go:3638 [s3,r1/3:{/Min-b}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] I181018 08:37:50.749450 46629 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=a27f1b7d] removing replica r2/1
[08:42:53][Step 2/2] I181018 08:37:50.750793 46793 storage/store.go:2580 [s2,r1/2:{/Min-b}] removing replica r2/2
[08:42:53][Step 2/2] I181018 08:37:50.757610 46588 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot f6689e4a at applied index 33
[08:42:53][Step 2/2] I181018 08:37:50.759473 46588 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 74, log entries: 23, rate-limit: 2.0 MiB/sec, 5ms
[08:42:53][Step 2/2] I181018 08:37:50.761589 47043 storage/replica_raftstorage.go:804 [s3,r1/3:{/Min-b}] applying preemptive snapshot at index 33 (id=f6689e4a, encoded size=15273, 1 rocksdb batches, 23 log entries)
[08:42:53][Step 2/2] I181018 08:37:50.770623 47043 storage/replica_raftstorage.go:810 [s3,r1/3:/M{in-ax}] applied preemptive snapshot in 9ms [clear=0ms batch=0ms entries=7ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:50.773515 46588 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=4, gen=2]
[08:42:53][Step 2/2] I181018 08:37:50.790042 46588 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b5adb155] proposing ADD_REPLICA((n3,s3):4): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):4] next=5
[08:42:53][Step 2/2] W181018 08:37:50.910299 46984 storage/raft_transport.go:282 unable to accept Raft message from (n3,s3):4: no handler registered for (n1,s1):1
[08:42:53][Step 2/2] W181018 08:37:50.911536 46864 storage/store.go:3662 [s3,r1/4:/M{in-ax}] raft error: node 1 claims to not contain store 1 for replica (n1,s1):1: store 1 was not found
[08:42:53][Step 2/2] W181018 08:37:50.911855 46862 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: store 1 was not found:
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeReadoptedLHSFollower (0.88s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeWatcher
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeWatcher/inject-failures=false
[08:42:53][Step 2/2] I181018 08:37:50.992141 46848 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:34725" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:37:51.046143 46848 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:51.047307 46848 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:42985" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:51.053257 46962 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:34725
[08:42:53][Step 2/2] W181018 08:37:51.153334 46848 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:51.155288 46848 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:43059" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:51.157030 47241 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:34725
[08:42:53][Step 2/2] I181018 08:37:51.230772 46848 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot de64abc1 at applied index 16
[08:42:53][Step 2/2] I181018 08:37:51.232199 46848 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:37:51.234566 47071 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=de64abc1, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:53][Step 2/2] I181018 08:37:51.238090 47071 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:51.241511 46848 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:37:51.254749 46848 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=94466cc3] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:51.274853 46848 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot cece94bd at applied index 18
[08:42:53][Step 2/2] I181018 08:37:51.276541 46848 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:37:51.278620 47373 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=cece94bd, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:53][Step 2/2] I181018 08:37:51.282632 47373 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:51.286155 46848 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:37:51.310476 46848 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=982f1f80] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] I181018 08:37:51.601158 46848 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:51.684204 47298 storage/replica_proposal.go:212 [s3,r2/3:{b-/Max}] new range lease repl=(n3,s3):3 seq=2 start=0.000000123,387 epo=1 pro=0.000000123,388 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,6
[08:42:53][Step 2/2] I181018 08:37:51.699931 46848 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:51.755045 47081 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=18d6387b] removing replica r2/1
[08:42:53][Step 2/2] I181018 08:37:51.757104 47202 storage/store.go:2580 [s2,r1/2:{/Min-b}] removing replica r2/2
[08:42:53][Step 2/2] I181018 08:37:51.763997 47301 storage/store.go:2580 [s3,r1/3:{/Min-b}] removing replica r2/3
[08:42:53][Step 2/2] W181018 08:37:51.797997 47395 storage/raft_transport.go:282 unable to accept Raft message from (n3,s3):?: no handler registered for (n1,s1):?
[08:42:53][Step 2/2] W181018 08:37:51.798748 47395 storage/raft_transport.go:282 unable to accept Raft message from (n3,s3):?: no handler registered for (n1,s1):?
[08:42:53][Step 2/2] W181018 08:37:51.799458 47250 storage/store.go:3662 [s3] raft error: node 1 claims to not contain store 1 for replica (n1,s1):?: store 1 was not found
[08:42:53][Step 2/2] W181018 08:37:51.799676 47382 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: store 1 was not found:
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeWatcher/inject-failures=true
[08:42:53][Step 2/2] I181018 08:37:51.879078 47397 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:33435" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:37:51.921965 47397 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:51.923043 47397 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45751" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:51.925425 47528 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:33435
[08:42:53][Step 2/2] W181018 08:37:51.972256 47397 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:51.973371 47397 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:45239" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:51.974828 47547 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:33435
[08:42:53][Step 2/2] I181018 08:37:51.996494 47397 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot f6ae8f56 at applied index 16
[08:42:53][Step 2/2] I181018 08:37:51.998045 47397 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:37:52.001246 47551 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=f6ae8f56, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:53][Step 2/2] I181018 08:37:52.005305 47551 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=1ms]
[08:42:53][Step 2/2] I181018 08:37:52.010471 47397 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:37:52.018430 47397 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=1725da3f] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:52.029488 47397 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot e4157c4e at applied index 18
[08:42:53][Step 2/2] I181018 08:37:52.031009 47397 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:37:52.033422 47789 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=e4157c4e, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:53][Step 2/2] I181018 08:37:52.037538 47789 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:52.040413 47397 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:37:52.058016 47397 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=5e503a17] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] I181018 08:37:52.218165 47397 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:52.293596 47698 storage/replica_proposal.go:212 [s3,r2/3:{b-/Max}] new range lease repl=(n3,s3):3 seq=2 start=0.000000123,321 epo=1 pro=0.000000123,322 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,6
[08:42:53][Step 2/2] I181018 08:37:52.304545 47397 storage/replica_command.go:432 [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0] into this range
[08:42:53][Step 2/2] W181018 08:37:52.349654 47812 storage/replica.go:2902 [s3,r2/3:{b-/Max}] error while watching for merge to complete: PushTxn: storage/client_merge_test.go:2111: injected failure
[08:42:53][Step 2/2] W181018 08:37:52.401867 47812 storage/replica.go:2902 [s3,r2/3:{b-/Max}] error while watching for merge to complete: PushTxn: storage/client_merge_test.go:2111: injected failure
[08:42:53][Step 2/2] W181018 08:37:52.511481 47812 storage/replica.go:2902 [s3,r2/3:{b-/Max}] error while watching for merge to complete: PushTxn: storage/client_merge_test.go:2111: injected failure
[08:42:53][Step 2/2] W181018 08:37:52.735266 47812 storage/replica.go:2946 [s3,r2/3:{b-/Max}] error while watching for merge to complete: Get /Meta2/Max: storage/client_merge_test.go:2116: injected failure
[08:42:53][Step 2/2] W181018 08:37:52.784543 47812 storage/replica.go:2946 [s3,r2/3:{b-/Max}] error while watching for merge to complete: Get /Meta2/Max: storage/client_merge_test.go:2116: injected failure
[08:42:53][Step 2/2] W181018 08:37:52.880069 47812 storage/replica.go:2946 [s3,r2/3:{b-/Max}] error while watching for merge to complete: Get /Meta2/Max: storage/client_merge_test.go:2116: injected failure
[08:42:53][Step 2/2] I181018 08:37:53.210683 47472 storage/store.go:2580 [s1,r1/1:{/Min-b},txn=242a62af] removing replica r2/1
[08:42:53][Step 2/2] I181018 08:37:53.211619 47592 storage/store.go:2580 [s2,r1/2:{/Min-b}] removing replica r2/2
[08:42:53][Step 2/2] E181018 08:37:53.216617 47772 storage/store.go:3657 [s3,r2/3:{b-/Max}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:37:53.216674 47768 storage/store.go:3657 [s3,r2/3:{b-/Max}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] I181018 08:37:53.233283 47699 storage/store.go:2580 [s3,r1/3:{/Min-b}] removing replica r2/3
[08:42:53][Step 2/2] W181018 08:37:53.251651 47447 storage/node_liveness.go:454 [hb] failed node liveness heartbeat: node unavailable; try another peer
[08:42:53][Step 2/2] I181018 08:37:53.253234 47847 internal/client/txn.go:637 async rollback failed: node unavailable; try another peer
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeWatcher (2.35s)
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeWatcher/inject-failures=false (0.88s)
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeWatcher/inject-failures=true (1.46s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeSlowWatcher
[08:42:53][Step 2/2] --- SKIP: TestStoreRangeMergeSlowWatcher (0.01s)
[08:42:53][Step 2/2] client_merge_test.go:2211: flawed test: deadlocks if merge transaction retries
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeRaftSnapshot
[08:42:53][Step 2/2] I181018 08:37:53.348931 47859 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:35265" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:37:53.395934 47859 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:53.397000 47859 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:34595" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:53.399416 48075 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:35265
[08:42:53][Step 2/2] W181018 08:37:53.442046 47859 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:53.442898 47859 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:35397" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:53.446752 47870 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:35265
[08:42:53][Step 2/2] I181018 08:37:53.515114 47859 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 83ec10ff at applied index 16
[08:42:53][Step 2/2] I181018 08:37:53.516713 47859 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:37:53.518564 48215 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=83ec10ff, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:53][Step 2/2] I181018 08:37:53.521419 48215 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:53.525289 47859 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:37:53.532914 47859 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=fa5831c2] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:37:53.543267 47859 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot c50a109d at applied index 18
[08:42:53][Step 2/2] I181018 08:37:53.544907 47859 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:37:53.546662 48077 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=c50a109d, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:53][Step 2/2] I181018 08:37:53.549981 48077 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:53.554197 47859 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:37:53.570257 47859 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=9cee5ee1] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] I181018 08:37:53.727519 47859 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r2]
[08:42:53][Step 2/2] I181018 08:37:53.787732 47859 storage/replica_command.go:300 [s1,r2/1:{a-/Max}] initiating a split of this range at key "b" [r3]
[08:42:53][Step 2/2] I181018 08:37:53.872777 47859 storage/replica_command.go:300 [s1,r3/1:{b-/Max}] initiating a split of this range at key "c" [r4]
[08:42:53][Step 2/2] I181018 08:37:53.953860 47859 storage/replica_command.go:432 [s1,r2/1:{a-b}] initiating a merge of r3:{b-c} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=1] into this range
[08:42:53][Step 2/2] I181018 08:37:53.998102 47909 storage/store.go:2580 [s1,r2/1:{a-b},txn=57519f66] removing replica r3/1
[08:42:53][Step 2/2] I181018 08:37:53.999154 48000 storage/store.go:2580 [s2,r2/2:{a-b}] removing replica r3/2
[08:42:53][Step 2/2] I181018 08:37:54.002562 47859 storage/replica_command.go:432 [s1,r2/1:{a-c}] initiating a merge of r4:{c-/Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:54.056130 47936 storage/store.go:2580 [s1,r2/1:{a-c},txn=85740175] removing replica r4/1
[08:42:53][Step 2/2] I181018 08:37:54.058715 48016 storage/store.go:2580 [s2,r2/2:{a-c}] removing replica r4/2
[08:42:53][Step 2/2] I181018 08:37:54.120440 47859 storage/replica_command.go:300 [s1,r2/1:{a-/Max}] initiating a split of this range at key "d" [r5]
[08:42:53][Step 2/2] I181018 08:37:54.178619 48109 storage/store_snapshot.go:621 [raftsnapshot,s1,r2/1:{a-d}] sending Raft snapshot f29f042b at applied index 28
[08:42:53][Step 2/2] I181018 08:37:54.179646 48109 storage/store_snapshot.go:664 [raftsnapshot,s1,r2/1:{a-d}] streamed snapshot to (n3,s3):3: kv pairs: 18, log entries: 3, rate-limit: 8.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:37:54.180659 48112 storage/replica_raftstorage.go:804 [s3,r2/3:{a-b}] applying Raft snapshot at index 28 (id=f29f042b, encoded size=1858, 1 rocksdb batches, 3 log entries)
[08:42:53][Step 2/2] I181018 08:37:54.183013 48112 storage/store.go:2580 [s3,r2/3:{a-b}] removing replica r3/3
[08:42:53][Step 2/2] I181018 08:37:54.183948 48112 storage/store.go:2580 [s3,r2/3:{a-b}] removing replica r4/3
[08:42:53][Step 2/2] I181018 08:37:54.184927 48112 storage/replica_raftstorage.go:810 [s3,r2/3:{a-d}] applied Raft snapshot in 4ms [clear=1ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:37:54.406021 48287 storage/store_snapshot.go:621 [raftsnapshot,s2,r5/2:{d-/Max}] sending Raft snapshot 86c27914 at applied index 10
[08:42:53][Step 2/2] I181018 08:37:54.407370 48287 storage/store_snapshot.go:664 [raftsnapshot,s2,r5/2:{d-/Max}] streamed snapshot to (n3,s3):3: kv pairs: 42, log entries: 0, rate-limit: 8.0 MiB/sec, 5ms
[08:42:53][Step 2/2] I181018 08:37:54.409482 48266 storage/replica_raftstorage.go:804 [s3,r5/3:{-}] applying Raft snapshot at index 10 (id=86c27914, encoded size=7473, 1 rocksdb batches, 0 log entries)
[08:42:53][Step 2/2] I181018 08:37:54.411107 48266 storage/replica_raftstorage.go:810 [s3,r5/3:{d-/Max}] applied Raft snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeRaftSnapshot (1.20s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeDuringShutdown
[08:42:53][Step 2/2] I181018 08:37:54.545892 48297 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:46749" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:54.571870 48297 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] I181018 08:37:54.602391 48297 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:53][Step 2/2] I181018 08:37:54.607437 48412 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async] storage.pendingLeaseRequest: requesting lease
[08:42:53][Step 2/2] 1 [async] kv.TxnCoordSender: heartbeat loop
[08:42:53][Step 2/2] I181018 08:37:54.607746 48412 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async] storage.pendingLeaseRequest: requesting lease
[08:42:53][Step 2/2] I181018 08:37:54.617803 48310 storage/replica_proposal.go:212 [s1,r2/1:{b-/Max}] new range lease repl=(n1,s1):1 seq=2 start=0.000000000,0 epo=1 pro=1.800000125,6 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,6
[08:42:53][Step 2/2] I181018 08:37:54.620752 48297 storage/client_test.go:1252 test clock advanced to: 3.600000127,0
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeDuringShutdown (0.20s)
[08:42:53][Step 2/2] === RUN TestMergeQueue
[08:42:53][Step 2/2] I181018 08:37:54.750946 48419 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:35241" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:37:54.790493 48419 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:37:54.791279 48419 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:34611" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:37:54.792759 48626 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:35241
[08:42:53][Step 2/2] I181018 08:37:54.870107 48419 storage/client_test.go:421 gossip network initialized
[08:42:53][Step 2/2] I181018 08:37:54.875232 48419 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r2]
[08:42:53][Step 2/2] I181018 08:37:54.899039 48419 storage/replica_command.go:300 [s1,r2/1:{a-/Max}] initiating a split of this range at key "b" [r3]
[08:42:53][Step 2/2] I181018 08:37:54.923962 48419 storage/replica_command.go:300 [s1,r3/1:{b-/Max}] initiating a split of this range at key "c" [r4]
[08:42:53][Step 2/2] === RUN TestMergeQueue/sanity
[08:42:53][Step 2/2] I181018 08:37:55.445758 48678 storage/replica_command.go:432 [merge,s1,r2/1:{a-b}] initiating a merge of r3:{b-c} [(n1,s1):1, next=2, gen=1] into this range
[08:42:53][Step 2/2] I181018 08:37:55.906300 48466 storage/store.go:2580 [merge,s1,r2/1:{a-b},txn=e8861a50] removing replica r3/1
[08:42:53][Step 2/2] I181018 08:37:56.421286 48692 storage/replica_command.go:300 [s1,r2/1:{a-c}] initiating a split of this range at key "b" [r5]
[08:42:53][Step 2/2] I181018 08:37:56.470501 48725 storage/replica_command.go:432 [merge,s1,r2/1:{a-b}] initiating a merge of r5:{b-c} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:56.779484 48513 gossip/gossip.go:1510 [n2] node has connected to cluster via gossip
[08:42:53][Step 2/2] I181018 08:37:56.937072 48435 storage/store.go:2580 [merge,s1,r2/1:{a-b},txn=db2daf38] removing replica r5/1
[08:42:53][Step 2/2] I181018 08:37:57.497181 48692 storage/replica_command.go:300 [s1,r2/1:{a-c}] initiating a split of this range at key "b" [r6]
[08:42:53][Step 2/2] === RUN TestMergeQueue/both-empty
[08:42:53][Step 2/2] I181018 08:37:57.562522 48634 storage/replica_command.go:432 [merge,s1,r2/1:{a-b}] initiating a merge of r6:{b-c} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:58.511412 48400 storage/store.go:2580 [merge,s1,r2/1:{a-b},txn=4b17af4c] removing replica r6/1
[08:42:53][Step 2/2] I181018 08:37:58.579710 48729 storage/replica_command.go:300 [s1,r2/1:{a-c}] initiating a split of this range at key "b" [r7]
[08:42:53][Step 2/2] I181018 08:37:58.636169 48729 storage/replica_command.go:432 [merge,s1,r2/1:{a-b}] initiating a merge of r7:{b-c} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:58.661042 48444 storage/store.go:2580 [merge,s1,r2/1:{a-b},txn=44b54093] removing replica r7/1
[08:42:53][Step 2/2] === RUN TestMergeQueue/lhs-undersize
[08:42:53][Step 2/2] I181018 08:37:59.621312 48642 storage/replica_command.go:300 [s1,r2/1:{a-c}] initiating a split of this range at key "b" [r8]
[08:42:53][Step 2/2] I181018 08:37:59.665783 48642 storage/replica_command.go:432 [merge,s1,r2/1:{a-b}] initiating a merge of r8:{b-c} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:37:59.699704 48473 storage/store.go:2580 [merge,s1,r2/1:{a-b},txn=e53013b1] removing replica r8/1
[08:42:53][Step 2/2] === RUN TestMergeQueue/combined-threshold
[08:42:53][Step 2/2] I181018 08:38:00.718995 48688 storage/replica_command.go:300 [s1,r2/1:{a-c}] initiating a split of this range at key "b" [r9]
[08:42:53][Step 2/2] I181018 08:38:00.771352 48688 storage/replica_command.go:432 [merge,s1,r2/1:{a-b}] initiating a merge of r9:{b-c} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:38:00.802439 48442 storage/store.go:2580 [merge,s1,r2/1:{a-b},txn=cf0e7534] removing replica r9/1
[08:42:53][Step 2/2] === RUN TestMergeQueue/non-collocated
[08:42:53][Step 2/2] I181018 08:38:01.778705 48738 storage/replica_command.go:300 [s1,r2/1:{a-c}] initiating a split of this range at key "b" [r10]
[08:42:53][Step 2/2] I181018 08:38:01.812140 48738 storage/store_snapshot.go:621 [s1,r10/1:{b-c}] sending preemptive snapshot 12d89bd0 at applied index 11
[08:42:53][Step 2/2] I181018 08:38:01.846787 48738 storage/store_snapshot.go:664 [s1,r10/1:{b-c}] streamed snapshot to (n2,s2):?: kv pairs: 20, log entries: 1, rate-limit: 2.0 MiB/sec, 37ms
[08:42:53][Step 2/2] I181018 08:38:01.869354 48697 storage/replica_raftstorage.go:804 [s2,r10/?:{-}] applying preemptive snapshot at index 11 (id=12d89bd0, encoded size=1049415, 1 rocksdb batches, 1 log entries)
[08:42:53][Step 2/2] I181018 08:38:01.880769 48697 storage/replica_raftstorage.go:810 [s2,r10/?:{b-c}] applied preemptive snapshot in 11ms [clear=0ms batch=6ms entries=0ms commit=3ms]
[08:42:53][Step 2/2] I181018 08:38:01.885569 48738 storage/replica_command.go:816 [s1,r10/1:{b-c}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r10:{b-c} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:38:01.899614 48738 storage/replica.go:3884 [s1,r10/1:{b-c},txn=93a22851] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:38:01.939191 48548 storage/replica_proposal.go:212 [s2,r10/2:{b-c}] new range lease repl=(n2,s2):2 seq=2 start=0.000000123,2411 epo=1 pro=0.000000123,2412 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,6
[08:42:53][Step 2/2] I181018 08:38:02.059708 48738 storage/replica_command.go:816 [s2,r10/2:{b-c}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r10:{b-c} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:38:02.074377 48738 storage/replica.go:3884 [s2,r10/2:{b-c},txn=ca065bd8] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:38:02.085266 48780 storage/store.go:3640 [s1,r10/1:{b-c}] added to replica GC queue (peer suggestion)
[08:42:53][Step 2/2] I181018 08:38:02.088380 48820 storage/store.go:2580 [replicaGC,s1,r10/1:{b-c}] removing replica r10/1
[08:42:53][Step 2/2] I181018 08:38:02.089808 48820 storage/replica.go:863 [replicaGC,s1,r10/1:{b-c}] removed 7 (1+6) keys in 1ms [clear=0ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:02.103686 48784 storage/store_snapshot.go:621 [merge,s2,r10/2:{b-c}] sending preemptive snapshot 45c0c641 at applied index 21
[08:42:53][Step 2/2] I181018 08:38:02.104777 48784 storage/store_snapshot.go:664 [merge,s2,r10/2:{b-c}] streamed snapshot to (n1,s1):?: kv pairs: 21, log entries: 11, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:38:02.107148 48699 storage/replica_raftstorage.go:804 [s1,r10/?:{-}] applying preemptive snapshot at index 21 (id=45c0c641, encoded size=2687, 1 rocksdb batches, 11 log entries)
[08:42:53][Step 2/2] I181018 08:38:02.112317 48699 storage/replica_raftstorage.go:810 [s1,r10/?:{b-c}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:02.118139 48784 storage/replica_command.go:816 [merge,s2,r10/2:{b-c}] change replicas (ADD_REPLICA (n1,s1):3): read existing descriptor r10:{b-c} [(n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:38:02.129705 48784 storage/replica.go:3884 [merge,s2,r10/2:{b-c},txn=b3e706e8] proposing ADD_REPLICA((n1,s1):3): updated=[(n2,s2):2 (n1,s1):3] next=4
[08:42:53][Step 2/2] I181018 08:38:02.148685 48442 storage/replica_proposal.go:212 [s1,r10/3:{b-c}] new range lease repl=(n1,s1):3 seq=3 start=0.000000123,2626 epo=1 pro=0.000000123,2627 following repl=(n2,s2):2 seq=2 start=0.000000123,2411 epo=1 pro=0.000000123,2412
[08:42:53][Step 2/2] I181018 08:38:02.149679 48852 storage/queue.go:876 [replicate] purgatory is now empty
[08:42:53][Step 2/2] I181018 08:38:02.161867 48784 storage/replica_command.go:816 [merge,s1,r10/3:{b-c}] change replicas (REMOVE_REPLICA (n2,s2):2): read existing descriptor r10:{b-c} [(n2,s2):2, (n1,s1):3, next=4, gen=0]
[08:42:53][Step 2/2] I181018 08:38:02.175452 48784 storage/replica.go:3884 [merge,s1,r10/3:{b-c},txn=6981177d] proposing REMOVE_REPLICA((n2,s2):2): updated=[(n1,s1):3] next=4
[08:42:53][Step 2/2] I181018 08:38:02.184896 48782 storage/store.go:3640 [s2,r10/2:{b-c}] added to replica GC queue (peer suggestion)
[08:42:53][Step 2/2] I181018 08:38:02.186233 48784 storage/replica_command.go:432 [merge,s1,r2/1:{a-b}] initiating a merge of r10:{b-c} [(n1,s1):3, next=4, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:38:02.186823 48815 storage/store.go:2580 [replicaGC,s2,r10/2:{b-c}] removing replica r10/2
[08:42:53][Step 2/2] I181018 08:38:02.188607 48815 storage/replica.go:863 [replicaGC,s2,r10/2:{b-c}] removed 7 (0+7) keys in 1ms [clear=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:02.214125 48458 storage/store.go:2580 [merge,s1,r2/1:{a-b},txn=822d944c] removing replica r10/3
[08:42:53][Step 2/2] I181018 08:38:02.219360 48705 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.replicate: purgatory processing replica
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] I181018 08:38:02.219704 48705 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] W181018 08:38:02.220160 48703 storage/intent_resolver.go:745 [s1] failed to cleanup transaction intents: failed to resolve intents: result is ambiguous (server shutdown)
[08:42:53][Step 2/2] W181018 08:38:02.236511 48205 storage/raft_transport.go:584 while processing outgoing Raft queue to node 2: rpc error: code = Unavailable desc = transport is closing:
[08:42:53][Step 2/2] --- PASS: TestMergeQueue (7.56s)
[08:42:53][Step 2/2] --- PASS: TestMergeQueue/sanity (2.10s)
[08:42:53][Step 2/2] --- PASS: TestMergeQueue/both-empty (1.13s)
[08:42:53][Step 2/2] --- PASS: TestMergeQueue/lhs-undersize (1.04s)
[08:42:53][Step 2/2] --- PASS: TestMergeQueue/combined-threshold (1.10s)
[08:42:53][Step 2/2] --- PASS: TestMergeQueue/non-collocated (1.41s)
[08:42:53][Step 2/2] === RUN TestInvalidSubsumeRequest
[08:42:53][Step 2/2] I181018 08:38:02.294388 48823 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:39515" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:02.314971 48823 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:53][Step 2/2] --- PASS: TestInvalidSubsumeRequest (0.11s)
[08:42:53][Step 2/2] === RUN TestStoreRangeMergeClusterVersion
[08:42:53][Step 2/2] I181018 08:38:02.424811 48973 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41065" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:02.446538 48973 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b2" [r2]
[08:42:53][Step 2/2] I181018 08:38:02.467636 48973 storage/replica_command.go:300 [s1,r1/1:{/Min-b2}] initiating a split of this range at key "b1" [r3]
[08:42:53][Step 2/2] I181018 08:38:02.490313 48973 storage/replica_command.go:432 [s1,r1/1:{/Min-b1}] initiating a merge of r3:b{1-2} [(n1,s1):1, next=2, gen=0] into this range
[08:42:53][Step 2/2] I181018 08:38:02.518062 49003 storage/store.go:2580 [s1,r1/1:{/Min-b1},txn=cf57090f] removing replica r3/1
[08:42:53][Step 2/2] --- PASS: TestStoreRangeMergeClusterVersion (0.17s)
[08:42:53][Step 2/2] === RUN TestStoreResolveMetrics
[08:42:53][Step 2/2] I181018 08:38:02.600103 49099 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:39811" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:38:02.684231 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.684556 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.684789 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.685004 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.685204 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.685396 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.685565 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.685805 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.686006 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.686192 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.686387 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.686599 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.686821 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.687014 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.687203 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.687406 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.687627 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.687861 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.688094 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.688291 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.688492 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.688735 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.688942 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.689123 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.689289 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.689484 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.689686 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.689880 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.690048 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.690205 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.690365 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.690535 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.690739 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.690874 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.691012 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] W181018 08:38:02.691208 49099 storage/engine/mvcc.go:2192 [s1,r1/1:/M{in-ax}] unable to find value for "a" ({ID:b097a3db-a0ce-4abc-a41d-c47b6ba5723a Isolation:SERIALIZABLE Key:[97] Epoch:0 Timestamp:0.000000123,0 Priority:0 Sequence:0 DeprecatedBatchIndex:0})
[08:42:53][Step 2/2] --- PASS: TestStoreResolveMetrics (0.35s)
[08:42:53][Step 2/2] === RUN TestStoreMetrics
[08:42:53][Step 2/2] I181018 08:38:02.928876 49240 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:46553" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:38:02.973317 49240 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:02.974350 49240 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:39947" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:02.977330 49342 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:46553
[08:42:53][Step 2/2] W181018 08:38:03.025175 49240 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:03.025998 49240 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:45455" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:03.027612 49578 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:46553
[08:42:53][Step 2/2] I181018 08:38:03.049168 49240 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "m" [r2]
[08:42:53][Step 2/2] I181018 08:38:03.069111 49560 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.replicate: purgatory processing replica
[08:42:53][Step 2/2] I181018 08:38:03.114936 49607 storage/replica_proposal.go:212 [s1,r2/1:{m-/Max}] new range lease repl=(n1,s1):1 seq=2 start=0.000000123,182 epo=1 pro=0.000000123,197 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,6
[08:42:53][Step 2/2] I181018 08:38:03.154891 49240 storage/store_snapshot.go:621 [s1,r2/1:{m-/Max}] sending preemptive snapshot c31df918 at applied index 14
[08:42:53][Step 2/2] I181018 08:38:03.156259 49240 storage/store_snapshot.go:664 [s1,r2/1:{m-/Max}] streamed snapshot to (n2,s2):?: kv pairs: 42, log entries: 4, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:38:03.157907 49703 storage/replica_raftstorage.go:804 [s2,r2/?:{-}] applying preemptive snapshot at index 14 (id=c31df918, encoded size=7628, 1 rocksdb batches, 4 log entries)
[08:42:53][Step 2/2] I181018 08:38:03.161249 49703 storage/replica_raftstorage.go:810 [s2,r2/?:{m-/Max}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:03.174166 49613 storage/replica_proposal.go:212 [s1,r2/1:{m-/Max}] new range lease repl=(n1,s1):1 seq=3 start=0.000000123,182 epo=2 pro=0.000000123,347 following repl=(n1,s1):1 seq=2 start=0.000000123,182 epo=1 pro=0.000000123,197
[08:42:53][Step 2/2] I181018 08:38:03.176453 49240 storage/replica_command.go:816 [s1,r2/1:{m-/Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:{m-/Max} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:38:03.185861 49240 storage/replica.go:3884 [s1,r2/1:{m-/Max},txn=403b4849] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:38:03.194694 49240 storage/store_snapshot.go:621 [s1,r2/1:{m-/Max}] sending preemptive snapshot 29a26df9 at applied index 17
[08:42:53][Step 2/2] I181018 08:38:03.196363 49240 storage/store_snapshot.go:664 [s1,r2/1:{m-/Max}] streamed snapshot to (n3,s3):?: kv pairs: 44, log entries: 7, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:38:03.198087 49472 storage/replica_raftstorage.go:804 [s3,r2/?:{-}] applying preemptive snapshot at index 17 (id=29a26df9, encoded size=8657, 1 rocksdb batches, 7 log entries)
[08:42:53][Step 2/2] I181018 08:38:03.202541 49472 storage/replica_raftstorage.go:810 [s3,r2/?:{m-/Max}] applied preemptive snapshot in 4ms [clear=1ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:03.207759 49240 storage/replica_command.go:816 [s1,r2/1:{m-/Max}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r2:{m-/Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:38:03.229706 49240 storage/replica.go:3884 [s1,r2/1:{m-/Max},txn=d869c241] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] I181018 08:38:03.621918 49943 storage/replica_proposal.go:212 [s1,r2/1:{m-/Max}] new range lease repl=(n1,s1):1 seq=4 start=0.000000123,500 epo=2 pro=0.000000123,506 following repl=(n1,s1):1 seq=3 start=0.000000123,182 epo=2 pro=0.000000123,347
[08:42:53][Step 2/2] I181018 08:38:03.640973 49949 storage/replica_proposal.go:212 [s1,r2/1:{m-/Max}] new range lease repl=(n1,s1):1 seq=5 start=0.000000123,500 epo=3 pro=0.000000123,703 following repl=(n1,s1):1 seq=4 start=0.000000123,500 epo=2 pro=0.000000123,506
[08:42:53][Step 2/2] I181018 08:38:04.106112 50213 storage/replica_proposal.go:212 [s1,r2/1:{m-/Max}] new range lease repl=(n1,s1):1 seq=6 start=0.000000123,773 epo=3 pro=0.000000123,781 following repl=(n1,s1):1 seq=5 start=0.000000123,500 epo=3 pro=0.000000123,703
[08:42:53][Step 2/2] I181018 08:38:04.125277 49697 storage/replica_proposal.go:212 [s1,r2/1:{m-/Max}] new range lease repl=(n1,s1):1 seq=7 start=0.000000123,773 epo=4 pro=0.000000123,990 following repl=(n1,s1):1 seq=6 start=0.000000123,773 epo=3 pro=0.000000123,781
[08:42:53][Step 2/2] I181018 08:38:04.133153 50354 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 [async] kv.TxnCoordSender: heartbeat loop
[08:42:53][Step 2/2] I181018 08:38:04.583192 50496 storage/replica_proposal.go:212 [s1,r2/1:{m-/Max}] new range lease repl=(n1,s1):1 seq=8 start=0.000000123,1048 epo=4 pro=0.000000123,1057 following repl=(n1,s1):1 seq=7 start=0.000000123,773 epo=4 pro=0.000000123,990
[08:42:53][Step 2/2] I181018 08:38:04.599839 49450 gossip/gossip.go:1510 [n3] node has connected to cluster via gossip
[08:42:53][Step 2/2] I181018 08:38:04.602325 50501 storage/replica_proposal.go:212 [s1,r2/1:{m-/Max}] new range lease repl=(n1,s1):1 seq=9 start=0.000000123,1048 epo=5 pro=0.000000123,1266 following repl=(n1,s1):1 seq=8 start=0.000000123,1048 epo=4 pro=0.000000123,1057
[08:42:53][Step 2/2] I181018 08:38:04.617394 50600 storage/replica_proposal.go:212 [s2,r2/2:{m-/Max}] new range lease repl=(n2,s2):2 seq=10 start=0.000000123,1316 epo=5 pro=0.000000123,1317 following repl=(n1,s1):1 seq=9 start=0.000000123,1048 epo=5 pro=0.000000123,1266
[08:42:53][Step 2/2] I181018 08:38:04.645244 49240 storage/replica_command.go:816 [s2,r2/2:{m-/Max}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r2:{m-/Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:53][Step 2/2] I181018 08:38:04.666253 49240 storage/replica.go:3884 [s2,r2/2:{m-/Max},txn=303068fd] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n3,s3):3 (n2,s2):2] next=4
[08:42:53][Step 2/2] I181018 08:38:04.674342 49595 storage/store.go:3640 [s1,r2/1:{m-/Max}] added to replica GC queue (peer suggestion)
[08:42:53][Step 2/2] I181018 08:38:04.678975 50479 storage/store.go:2580 [replicaGC,s1,r2/1:{m-/Max}] removing replica r2/1
[08:42:53][Step 2/2] I181018 08:38:04.680855 50479 storage/replica.go:863 [replicaGC,s1,r2/1:{m-/Max}] removed 43 (37+6) keys in 1ms [clear=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:04.682268 50481 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:53][Step 2/2] W181018 08:38:04.683264 50745 storage/intent_resolver.go:745 [s2] failed to cleanup transaction intents: could not GC completed transaction anchored at /Local/Range"m"/RangeDescriptor: result is ambiguous (server shutdown)
[08:42:53][Step 2/2] I181018 08:38:04.715314 49443 gossip/gossip.go:1510 [n2] node has connected to cluster via gossip
[08:42:53][Step 2/2] I181018 08:38:04.774996 50561 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.Replica: acquiring lease to gossip
[08:42:53][Step 2/2] 1 [async] storage.pendingLeaseRequest: requesting lease
[08:42:53][Step 2/2] I181018 08:38:04.775919 50561 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.Replica: acquiring lease to gossip
[08:42:53][Step 2/2] W181018 08:38:04.776700 50910 storage/store.go:1490 [s2,r2/2:{m-/Max}] could not gossip system config: [NotLeaseHolderError] r2: replica (n2,s2):2 not lease holder; lease holder unknown
[08:42:53][Step 2/2] W181018 08:38:04.777766 50825 storage/node_liveness.go:454 [hb] failed node liveness heartbeat: node unavailable; try another peer
[08:42:53][Step 2/2] W181018 08:38:04.780023 50918 internal/client/txn.go:532 [hb] failure aborting transaction: node unavailable; try another peer; abort caused by: node unavailable; try another peer
[08:42:53][Step 2/2] I181018 08:38:04.780053 51020 internal/client/txn.go:637 async rollback failed: node unavailable; try another peer
[08:42:53][Step 2/2] W181018 08:38:04.780470 50918 storage/node_liveness.go:454 [hb] failed node liveness heartbeat: node unavailable; try another peer
[08:42:53][Step 2/2] W181018 08:38:04.780690 50935 internal/client/txn.go:532 [hb] failure aborting transaction: node unavailable; try another peer; abort caused by: node unavailable; try another peer
[08:42:53][Step 2/2] W181018 08:38:04.781175 50935 storage/node_liveness.go:454 [hb] failed node liveness heartbeat: node unavailable; try another peer
[08:42:53][Step 2/2] --- PASS: TestStoreMetrics (1.93s)
[08:42:53][Step 2/2] === RUN TestRaftLogQueue
[08:42:53][Step 2/2] I181018 08:38:04.885396 51032 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:46421" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:38:04.922262 51032 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:04.923310 51032 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:36957" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:04.924754 51132 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:46421
[08:42:53][Step 2/2] W181018 08:38:04.968479 51032 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:04.969341 51032 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:34241" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:04.970598 51363 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:46421
[08:42:53][Step 2/2] --- PASS: TestRaftLogQueue (0.64s)
[08:42:53][Step 2/2] === RUN TestStoreRecoverFromEngine
[08:42:53][Step 2/2] I181018 08:38:05.454855 51280 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:05.522799 51280 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "m" [r2]
[08:42:53][Step 2/2] I181018 08:38:05.554491 51280 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] --- PASS: TestStoreRecoverFromEngine (0.15s)
[08:42:53][Step 2/2] === RUN TestStoreRecoverWithErrors
[08:42:53][Step 2/2] I181018 08:38:05.616217 51254 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:05.678257 51254 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] --- PASS: TestStoreRecoverWithErrors (0.13s)
[08:42:53][Step 2/2] === RUN TestReplicateRange
[08:42:53][Step 2/2] I181018 08:38:05.781705 51376 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:38003" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:38:05.823983 51376 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:05.824918 51376 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:37681" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:05.827464 51872 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:38003
[08:42:53][Step 2/2] I181018 08:38:05.847930 51376 storage/store_snapshot.go:621 sending preemptive snapshot 6e1e3360 at applied index 16
[08:42:53][Step 2/2] I181018 08:38:05.849464 51376 storage/store_snapshot.go:664 streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:38:05.851211 51874 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=6e1e3360, encoded size=8290, 1 rocksdb batches, 6 log entries)
[08:42:53][Step 2/2] I181018 08:38:05.854534 51874 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:05.857265 51376 storage/replica_command.go:816 change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:38:05.865004 51376 storage/replica.go:3884 [txn=06348ba3,s1,r1/1:/M{in-ax}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] --- PASS: TestReplicateRange (0.17s)
[08:42:53][Step 2/2] === RUN TestRestoreReplicas
[08:42:53][Step 2/2] W181018 08:38:05.946916 52073 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip system config: periodic gossip is disabled
[08:42:53][Step 2/2] W181018 08:38:05.947164 52074 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip node liveness: periodic gossip is disabled
[08:42:53][Step 2/2] I181018 08:38:05.958925 51961 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:46647" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:38:06.011014 51961 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:06.012354 51961 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45905" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:06.014029 52100 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:46647
[08:42:53][Step 2/2] I181018 08:38:06.033999 51961 storage/store_snapshot.go:621 sending preemptive snapshot bb9d2c45 at applied index 16
[08:42:53][Step 2/2] I181018 08:38:06.035602 51961 storage/store_snapshot.go:664 streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:38:06.036847 52064 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=bb9d2c45, encoded size=8290, 1 rocksdb batches, 6 log entries)
[08:42:53][Step 2/2] I181018 08:38:06.038959 52064 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:06.041299 51961 storage/replica_command.go:816 change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:38:06.048125 51961 storage/replica.go:3884 [txn=f46aa8d5,s1,r1/1:/M{in-ax}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] W181018 08:38:06.581959 52330 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip system config: periodic gossip is disabled
[08:42:53][Step 2/2] W181018 08:38:06.582084 52331 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip node liveness: periodic gossip is disabled
[08:42:53][Step 2/2] W181018 08:38:06.599840 52425 storage/store.go:1490 [s2,r1/2:/M{in-ax}] could not gossip system config: periodic gossip is disabled
[08:42:53][Step 2/2] W181018 08:38:06.599858 52426 storage/store.go:1490 [s2,r1/2:/M{in-ax}] could not gossip node liveness: periodic gossip is disabled
[08:42:53][Step 2/2] --- PASS: TestRestoreReplicas (0.75s)
[08:42:53][Step 2/2] === RUN TestFailedReplicaChange
[08:42:53][Step 2/2] I181018 08:38:06.699261 52436 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:35267" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:38:06.736878 52436 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:06.737828 52436 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:43133" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:06.739176 52110 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:35267
[08:42:53][Step 2/2] I181018 08:38:06.753941 52436 storage/store_snapshot.go:621 sending preemptive snapshot e7b706dd at applied index 15
[08:42:53][Step 2/2] I181018 08:38:06.755219 52436 storage/store_snapshot.go:664 streamed snapshot to (n2,s2):?: kv pairs: 48, log entries: 5, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:38:06.756243 52662 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 15 (id=e7b706dd, encoded size=8165, 1 rocksdb batches, 5 log entries)
[08:42:53][Step 2/2] I181018 08:38:06.758260 52662 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:06.760721 52436 storage/replica_command.go:816 change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:38:06.765963 52436 storage/replica_command.go:75 [txn=5b6e9745,s1,r1/1:/M{in-ax}] test injecting error: boom
[08:42:53][Step 2/2] I181018 08:38:06.772417 52436 storage/store_snapshot.go:621 sending preemptive snapshot fd311122 at applied index 17
[08:42:53][Step 2/2] I181018 08:38:06.773713 52436 storage/store_snapshot.go:664 streamed snapshot to (n2,s2):?: kv pairs: 48, log entries: 7, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:38:06.774400 52540 storage/replica_raftstorage.go:804 [s2,r1/?:/M{in-ax}] applying preemptive snapshot at index 17 (id=fd311122, encoded size=8621, 1 rocksdb batches, 7 log entries)
[08:42:53][Step 2/2] I181018 08:38:06.776886 52540 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:06.785661 52436 storage/replica_command.go:816 change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:38:06.792843 52436 storage/replica.go:3884 [txn=f37275e3,s1,r1/1:/M{in-ax}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] W181018 08:38:06.956395 52638 storage/replica.go:6500 [s2,r1/?:/M{in-ax}] could not acquire lease for range gossip: r1 was not found on s2
[08:42:53][Step 2/2] W181018 08:38:06.956694 52638 storage/store.go:1490 [s2,r1/?:/M{in-ax}] could not gossip first range descriptor: r1 was not found on s2
[08:42:53][Step 2/2] --- PASS: TestFailedReplicaChange (0.44s)
[08:42:53][Step 2/2] === RUN TestReplicateAfterTruncation
[08:42:53][Step 2/2] I181018 08:38:07.134777 52112 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:38775" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:38:07.173721 52112 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:07.174677 52112 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:46153" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:07.178321 52690 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:38775
[08:42:53][Step 2/2] I181018 08:38:07.210749 52112 storage/store_snapshot.go:621 sending preemptive snapshot eb910c83 at applied index 18
[08:42:53][Step 2/2] I181018 08:38:07.212211 52112 storage/store_snapshot.go:664 streamed snapshot to (n2,s2):?: kv pairs: 50, log entries: 2, rate-limit: 2.0 MiB/sec, 7ms
[08:42:53][Step 2/2] I181018 08:38:07.221029 52711 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 18 (id=eb910c83, encoded size=7975, 1 rocksdb batches, 2 log entries)
[08:42:53][Step 2/2] I181018 08:38:07.223387 52711 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:07.227206 52112 storage/replica_command.go:816 change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:38:07.236303 52112 storage/replica.go:3884 [txn=34b3eddb,s1,r1/1:/M{in-ax}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] --- PASS: TestReplicateAfterTruncation (0.46s)
[08:42:53][Step 2/2] === RUN TestRaftLogSizeAfterTruncation
[08:42:53][Step 2/2] I181018 08:38:07.599495 52666 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:40251" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:38:07.635843 52666 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:07.636963 52666 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:34631" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:07.638545 52904 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:40251
[08:42:53][Step 2/2] W181018 08:38:07.674350 52666 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:07.675443 52666 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:39755" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:07.678856 53257 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:40251
[08:42:53][Step 2/2] I181018 08:38:07.703277 52666 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot d9f44c56 at applied index 16
[08:42:53][Step 2/2] I181018 08:38:07.704625 52666 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 6ms
[08:42:53][Step 2/2] I181018 08:38:07.706335 52944 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=d9f44c56, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:53][Step 2/2] I181018 08:38:07.710060 52944 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:07.714039 52666 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:38:07.724534 52666 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=c614b265] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:38:07.742660 52666 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 2b094186 at applied index 18
[08:42:53][Step 2/2] I181018 08:38:07.744411 52666 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:38:07.757142 52352 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=2b094186, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:53][Step 2/2] I181018 08:38:07.762451 52352 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=4ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:07.766881 52666 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:38:07.791948 52666 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=2878dbea] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] --- PASS: TestRaftLogSizeAfterTruncation (0.58s)
[08:42:53][Step 2/2] === RUN TestSnapshotAfterTruncation
[08:42:53][Step 2/2] === RUN TestSnapshotAfterTruncation/sameTerm
[08:42:53][Step 2/2] I181018 08:38:08.189217 53282 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:36851" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:38:08.233663 53282 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:08.234796 53282 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:37203" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:08.236373 53530 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:36851
[08:42:53][Step 2/2] W181018 08:38:08.282245 53282 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:08.283109 53282 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:43685" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:08.285171 53553 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:36851
[08:42:53][Step 2/2] I181018 08:38:08.310145 53282 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 87687b31 at applied index 17
[08:42:53][Step 2/2] I181018 08:38:08.311555 53282 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 50, log entries: 7, rate-limit: 2.0 MiB/sec, 5ms
[08:42:53][Step 2/2] I181018 08:38:08.313774 53668 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 17 (id=87687b31, encoded size=8464, 1 rocksdb batches, 7 log entries)
[08:42:53][Step 2/2] I181018 08:38:08.318254 53668 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:08.321308 53282 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:38:08.330206 53282 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=308b6eba] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:38:08.341425 53282 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot c7c854b0 at applied index 19
[08:42:53][Step 2/2] I181018 08:38:08.343171 53282 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 53, log entries: 9, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:38:08.345148 53659 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 19 (id=c7c854b0, encoded size=9406, 1 rocksdb batches, 9 log entries)
[08:42:53][Step 2/2] I181018 08:38:08.349103 53659 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:08.351816 53282 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:38:08.374845 53282 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=d15bb756] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] I181018 08:38:08.756078 53533 storage/store_snapshot.go:621 [raftsnapshot,s1,r1/1:/M{in-ax}] sending Raft snapshot 04c5a8f8 at applied index 26
[08:42:53][Step 2/2] I181018 08:38:08.757592 53533 storage/store_snapshot.go:664 [raftsnapshot,s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):2: kv pairs: 60, log entries: 3, rate-limit: 8.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:38:08.758420 53535 storage/replica_raftstorage.go:804 [s2,r1/2:/M{in-ax}] applying Raft snapshot at index 26 (id=04c5a8f8, encoded size=8664, 1 rocksdb batches, 3 log entries)
[08:42:53][Step 2/2] I181018 08:38:08.760559 53535 storage/replica_raftstorage.go:810 [s2,r1/2:/M{in-ax}] applied Raft snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] W181018 08:38:08.812465 53295 storage/raft_transport.go:584 while processing outgoing Raft queue to node 3: EOF:
[08:42:53][Step 2/2] === RUN TestSnapshotAfterTruncation/differentTerm
[08:42:53][Step 2/2] I181018 08:38:08.877699 53781 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:37361" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:38:08.923601 53781 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:08.924339 53781 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:43925" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:08.927563 54007 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:37361
[08:42:53][Step 2/2] W181018 08:38:09.013528 53781 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:09.014389 53781 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:39885" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:09.016107 54119 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:37361
[08:42:53][Step 2/2] I181018 08:38:09.041466 53781 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 7ee47fc7 at applied index 17
[08:42:53][Step 2/2] I181018 08:38:09.042972 53781 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 50, log entries: 7, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:38:09.045198 54136 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 17 (id=7ee47fc7, encoded size=8464, 1 rocksdb batches, 7 log entries)
[08:42:53][Step 2/2] I181018 08:38:09.048686 54136 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:09.052566 53781 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:38:09.067560 53781 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=600034b3] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:38:09.081112 53781 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot afb7d340 at applied index 19
[08:42:53][Step 2/2] I181018 08:38:09.082605 53781 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 53, log entries: 9, rate-limit: 2.0 MiB/sec, 5ms
[08:42:53][Step 2/2] I181018 08:38:09.084556 54015 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 19 (id=afb7d340, encoded size=9406, 1 rocksdb batches, 9 log entries)
[08:42:53][Step 2/2] I181018 08:38:09.088879 54015 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:09.096396 53781 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:38:09.116101 53781 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=6eddfafa] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] E181018 08:38:09.787057 54268 storage/replica.go:5056 [s3,r1/3:/M{in-ax}] unable to add replica to Raft repair queue: queue disabled
[08:42:53][Step 2/2] I181018 08:38:09.794380 54257 storage/client_test.go:1252 [hb,txn=8c457ef3,range-lookup=/Meta2/System/NodeLiveness/1/NULL] test clock advanced to: 1.800000125,0
[08:42:53][Step 2/2] W181018 08:38:09.797597 54255 storage/node_liveness.go:454 [hb] failed node liveness heartbeat: aborted in distSender: context deadline exceeded
[08:42:53][Step 2/2] E181018 08:38:09.829017 54219 storage/replica.go:5056 [s1,r1/1:/M{in-ax}] unable to add replica to Raft repair queue: queue disabled
[08:42:53][Step 2/2] I181018 08:38:09.833488 54337 storage/node_liveness.go:790 [hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context deadline exceeded)
[08:42:53][Step 2/2] W181018 08:38:09.833899 54337 storage/node_liveness.go:454 [hb] failed node liveness heartbeat: context deadline exceeded
[08:42:53][Step 2/2] I181018 08:38:09.845747 54337 storage/node_liveness.go:451 [hb] heartbeat failed on epoch increment; retrying
[08:42:53][Step 2/2] I181018 08:38:09.946352 53781 storage/store_snapshot.go:621 [raftsnapshot,s1,r1/1:/M{in-ax}] sending Raft snapshot 4f021f90 at applied index 40
[08:42:53][Step 2/2] I181018 08:38:09.948108 53781 storage/store_snapshot.go:664 [raftsnapshot,s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):2: kv pairs: 62, log entries: 18, rate-limit: 8.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:38:09.948686 53665 storage/replica_raftstorage.go:804 [s2,r1/2:/M{in-ax}] applying Raft snapshot at index 40 (id=4f021f90, encoded size=10712, 1 rocksdb batches, 18 log entries)
[08:42:53][Step 2/2] I181018 08:38:09.953431 53665 storage/replica_raftstorage.go:810 [s2,r1/2:/M{in-ax}] applied Raft snapshot in 5ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:09.959023 54440 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:53][Step 2/2] 1 storage.Replica: acquiring lease to gossip
[08:42:53][Step 2/2] 1 [async] storage.pendingLeaseRequest: requesting lease
[08:42:53][Step 2/2] --- PASS: TestSnapshotAfterTruncation (1.86s)
[08:42:53][Step 2/2] --- PASS: TestSnapshotAfterTruncation/sameTerm (0.69s)
[08:42:53][Step 2/2] --- PASS: TestSnapshotAfterTruncation/differentTerm (1.17s)
[08:42:53][Step 2/2] === RUN TestFailedSnapshotFillsReservation
[08:42:53][Step 2/2] I181018 08:38:10.039686 54169 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:36961" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:38:10.085067 54169 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:10.086251 54169 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:38393" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:10.087927 54675 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:36961
[08:42:53][Step 2/2] W181018 08:38:10.182093 54169 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:10.183009 54169 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:43599" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:10.184666 54680 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:36961
[08:42:53][Step 2/2] --- PASS: TestFailedSnapshotFillsReservation (0.25s)
[08:42:53][Step 2/2] === RUN TestConcurrentRaftSnapshots
[08:42:53][Step 2/2] I181018 08:38:10.317398 53801 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:37539" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:38:10.361595 53801 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:10.362571 53801 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:32941" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:10.364278 54574 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:37539
[08:42:53][Step 2/2] W181018 08:38:10.411353 53801 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:10.412427 53801 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:39025" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:10.414020 55146 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:37539
[08:42:53][Step 2/2] W181018 08:38:10.466035 53801 gossip/gossip.go:1496 [n4] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:10.467021 53801 gossip/gossip.go:393 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:42693" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:10.468553 54682 gossip/client.go:129 [n4] started gossip client to 127.0.0.1:37539
[08:42:53][Step 2/2] W181018 08:38:10.517043 53801 gossip/gossip.go:1496 [n5] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:10.517958 53801 gossip/gossip.go:393 [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:35825" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:10.521923 55367 gossip/client.go:129 [n5] started gossip client to 127.0.0.1:37539
[08:42:53][Step 2/2] I181018 08:38:10.529812 55009 gossip/server.go:282 [n1] refusing gossip from n5 (max 3 conns); forwarding to n2 ({tcp 127.0.0.1:32941})
[08:42:53][Step 2/2] I181018 08:38:10.538964 55367 gossip/client.go:134 [n5] closing client to n1 (127.0.0.1:37539): received forward from n1 to 2 (127.0.0.1:32941)
[08:42:53][Step 2/2] I181018 08:38:10.541522 55365 gossip/gossip.go:1510 [n5] node has connected to cluster via gossip
[08:42:53][Step 2/2] I181018 08:38:10.543175 55353 gossip/client.go:129 [n5] started gossip client to 127.0.0.1:32941
[08:42:53][Step 2/2] I181018 08:38:10.597203 53801 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 81a112be at applied index 19
[08:42:53][Step 2/2] I181018 08:38:10.598861 53801 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 52, log entries: 9, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:38:10.600956 55385 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 19 (id=81a112be, encoded size=8816, 1 rocksdb batches, 9 log entries)
[08:42:53][Step 2/2] I181018 08:38:10.604042 55385 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:10.606620 53801 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:38:10.614117 53801 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=7f459615] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:38:10.625572 53801 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 386c0d14 at applied index 21
[08:42:53][Step 2/2] I181018 08:38:10.626759 53801 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 55, log entries: 11, rate-limit: 2.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:38:10.628238 55265 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 21 (id=386c0d14, encoded size=9758, 1 rocksdb batches, 11 log entries)
[08:42:53][Step 2/2] I181018 08:38:10.631891 55265 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:10.634671 53801 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:38:10.651229 53801 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=c3bf1442] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] I181018 08:38:10.665706 53801 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 3cc41ea1 at applied index 23
[08:42:53][Step 2/2] I181018 08:38:10.667105 53801 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n4,s4):?: kv pairs: 58, log entries: 13, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:38:10.668641 55034 storage/replica_raftstorage.go:804 [s4,r1/?:{-}] applying preemptive snapshot at index 23 (id=3cc41ea1, encoded size=10765, 1 rocksdb batches, 13 log entries)
[08:42:53][Step 2/2] I181018 08:38:10.673148 55034 storage/replica_raftstorage.go:810 [s4,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:10.675448 53801 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n4,s4):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:53][Step 2/2] I181018 08:38:10.686748 53801 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=f83bf429] proposing ADD_REPLICA((n4,s4):4): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3 (n4,s4):4] next=5
[08:42:53][Step 2/2] I181018 08:38:10.706292 53801 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 1645f43d at applied index 25
[08:42:53][Step 2/2] I181018 08:38:10.707633 53801 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n5,s5):?: kv pairs: 61, log entries: 15, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:38:10.709388 55400 storage/replica_raftstorage.go:804 [s5,r1/?:{-}] applying preemptive snapshot at index 25 (id=1645f43d, encoded size=11836, 1 rocksdb batches, 15 log entries)
[08:42:53][Step 2/2] I181018 08:38:10.713838 55400 storage/replica_raftstorage.go:810 [s5,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:10.716456 53801 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n5,s5):5): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, (n4,s4):4, next=5, gen=0]
[08:42:53][Step 2/2] I181018 08:38:10.728293 53801 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=cd775a52] proposing ADD_REPLICA((n5,s5):5): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3 (n4,s4):4 (n5,s5):5] next=6
[08:42:53][Step 2/2] I181018 08:38:10.964709 55458 storage/store_snapshot.go:621 [raftsnapshot,s1,r1/1:/M{in-ax}] sending Raft snapshot a3b9787b at applied index 33
[08:42:53][Step 2/2] I181018 08:38:10.966285 55458 storage/store_snapshot.go:664 [raftsnapshot,s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):2: kv pairs: 69, log entries: 4, rate-limit: 8.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:38:10.967273 55653 storage/replica_raftstorage.go:804 [s2,r1/2:/M{in-ax}] applying Raft snapshot at index 33 (id=a3b9787b, encoded size=9370, 1 rocksdb batches, 4 log entries)
[08:42:53][Step 2/2] I181018 08:38:10.970347 55653 storage/replica_raftstorage.go:810 [s2,r1/2:/M{in-ax}] applied Raft snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:10.994953 55434 storage/store_snapshot.go:621 [raftsnapshot,s1,r1/1:/M{in-ax}] sending Raft snapshot 6c92fadf at applied index 34
[08:42:53][Step 2/2] I181018 08:38:10.997293 55434 storage/store_snapshot.go:664 [raftsnapshot,s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):3: kv pairs: 70, log entries: 5, rate-limit: 8.0 MiB/sec, 4ms
[08:42:53][Step 2/2] I181018 08:38:10.997921 55436 storage/replica_raftstorage.go:804 [s3,r1/3:/M{in-ax}] applying Raft snapshot at index 34 (id=6c92fadf, encoded size=9540, 1 rocksdb batches, 5 log entries)
[08:42:53][Step 2/2] I181018 08:38:11.000529 55436 storage/replica_raftstorage.go:810 [s3,r1/3:/M{in-ax}] applied Raft snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:11.087808 55017 gossip/gossip.go:1510 [n2] node has connected to cluster via gossip
[08:42:53][Step 2/2] --- PASS: TestConcurrentRaftSnapshots (0.85s)
[08:42:53][Step 2/2] === RUN TestReplicateAfterRemoveAndSplit
[08:42:53][Step 2/2] I181018 08:38:11.165999 55629 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:35575" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] W181018 08:38:11.212101 55629 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:11.213231 55629 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:39961" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:11.214726 55782 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:35575
[08:42:53][Step 2/2] W181018 08:38:11.259812 55629 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:53][Step 2/2] I181018 08:38:11.260768 55629 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:33903" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:53][Step 2/2] I181018 08:38:11.266508 55969 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:35575
[08:42:53][Step 2/2] I181018 08:38:11.340646 55629 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 48a62ecb at applied index 16
[08:42:53][Step 2/2] I181018 08:38:11.342157 55629 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:38:11.344001 55554 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=48a62ecb, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:53][Step 2/2] I181018 08:38:11.346389 55554 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:11.349071 55629 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:53][Step 2/2] I181018 08:38:11.356879 55629 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=8819ced8] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:53][Step 2/2] I181018 08:38:11.367795 55629 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 1bdfd3e5 at applied index 18
[08:42:53][Step 2/2] I181018 08:38:11.369245 55629 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 3ms
[08:42:53][Step 2/2] I181018 08:38:11.370905 56021 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=1bdfd3e5, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:53][Step 2/2] I181018 08:38:11.374674 56021 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:53][Step 2/2] I181018 08:38:11.377752 55629 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:53][Step 2/2] I181018 08:38:11.402139 55629 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=de2e2ca2] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:53][Step 2/2] I181018 08:38:11.696936 55629 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:53][Step 2/2] I181018 08:38:11.706815 55629 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=cc135149] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2] next=4
[08:42:53][Step 2/2] I181018 08:38:11.723847 55629 storage/replica_command.go:300 initiating a split of this range at key "m" [r2]
[08:42:53][Step 2/2] I181018 08:38:11.745443 55629 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:53][Step 2/2] I181018 08:38:11.767269 55717 storage/replica_proposal.go:212 [s1,r2/1:{m-/Max}] new range lease repl=(n1,s1):1 seq=2 start=0.000000000,0 epo=1 pro=1.800000125,16 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,6
[08:42:53][Step 2/2] E181018 08:38:11.769486 55979 storage/store.go:3638 [s3,r1/3:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] E181018 08:38:11.769676 56025 storage/store.go:3638 [s3,r1/3:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:53][Step 2/2] W181018 08:38:11.771361 56098 storage/replica.go:6500 [s3,r1/3:/M{in-ax}] could not acquire lease for range gossip: r1 was not found on s3
[08:42:53][Step 2/2] W181018 08:38:11.771539 56098 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip node liveness: r1 was not found on s3
[08:42:53][Step 2/2] W181018 08:38:11.772354 56097 storage/replica.go:6500 [s3,r1/3:/M{in-ax}] could not acquire lease for range gossip: r1 was not found on s3
[08:42:53][Step 2/2] W181018 08:38:11.772541 56097 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip system config: r1 was not found on s3
[08:42:53][Step 2/2] W181018 08:38:11.773807 56096 storage/replica.go:6500 [s3,r1/3:/M{in-ax}] could not acquire lease for range gossip: r1 was not found on s3
[08:42:53][Step 2/2] W181018 08:38:11.773996 56096 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip first range descriptor: r1 was not found on s3
[08:42:53][Step 2/2] E181018 08:38:11.811180 56031 storage/store_snapshot.go:450 [s3] [n3,s3,r1/3:/M{in-ax}]: unable to add replica to GC queue: queue disabled
[08:42:53][Step 2/2] I181018 08:38:11.819705 56111 storage/store.go:2580 [replicaGC,s3,r1/3:/M{in-ax}] removing replica r1/3
[08:42:54][Step 2/2] W181018 08:38:11.819854 56097 storage/replica.go:6500 [s3,r1/3:/M{in-ax}] could not acquire lease for range gossip: r1 was not found on s3
[08:42:54][Step 2/2] W181018 08:38:11.820500 56097 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip system config: r1 was not found on s3
[08:42:54][Step 2/2] I181018 08:38:11.821410 56111 storage/replica.go:863 [replicaGC,s3,r1/3:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:11.822610 55629 storage/store_snapshot.go:621 sending preemptive snapshot a8b0ba96 at applied index 14
[08:42:54][Step 2/2] I181018 08:38:11.823816 55629 storage/store_snapshot.go:664 streamed snapshot to (n3,s3):?: kv pairs: 42, log entries: 4, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:11.825054 56132 storage/replica_raftstorage.go:804 [s3,r2/?:{-}] applying preemptive snapshot at index 14 (id=a8b0ba96, encoded size=7919, 1 rocksdb batches, 4 log entries)
[08:42:54][Step 2/2] I181018 08:38:11.827230 56132 storage/replica_raftstorage.go:810 [s3,r2/?:{m-/Max}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:11.829827 55629 storage/replica_command.go:816 change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r2:{m-/Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:11.846969 55629 storage/replica.go:3884 [txn=a0581af7,s1,r2/1:{m-/Max}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:11.854036 56120 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:54][Step 2/2] W181018 08:38:11.854873 56009 storage/intent_resolver.go:745 [s1] failed to cleanup transaction intents: failed to resolve intents: node unavailable; try another peer
[08:42:54][Step 2/2] --- PASS: TestReplicateAfterRemoveAndSplit (0.79s)
[08:42:54][Step 2/2] === RUN TestRefreshPendingCommands
[08:42:54][Step 2/2] === RUN TestRefreshPendingCommands/reasonSnapshotApplied
[08:42:54][Step 2/2] W181018 08:38:11.936521 56233 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip system config: periodic gossip is disabled
[08:42:54][Step 2/2] W181018 08:38:11.938054 56234 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip node liveness: periodic gossip is disabled
[08:42:54][Step 2/2] I181018 08:38:11.945327 56147 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41677" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:12.004187 56147 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:12.005115 56147 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:38571" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:12.006520 56250 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:41677
[08:42:54][Step 2/2] W181018 08:38:12.046184 56147 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:12.047101 56147 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:38045" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:12.049826 56114 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:41677
[08:42:54][Step 2/2] I181018 08:38:12.071749 56147 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 36daef57 at applied index 16
[08:42:54][Step 2/2] I181018 08:38:12.073040 56147 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:12.074363 56478 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=36daef57, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:12.076766 56478 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:12.079241 56147 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:12.086037 56147 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b8c65ffa] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:12.096169 56147 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 9f4a83fa at applied index 18
[08:42:54][Step 2/2] I181018 08:38:12.097919 56147 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:12.099039 56392 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=9f4a83fa, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:12.101944 56392 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:12.104507 56147 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:12.124804 56147 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=c52732de] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:12.328682 56147 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:54][Step 2/2] I181018 08:38:12.331520 56147 storage/client_test.go:1252 test clock advanced to: 3.600000127,0
[08:42:54][Step 2/2] W181018 08:38:12.331630 56609 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip system config: periodic gossip is disabled
[08:42:54][Step 2/2] W181018 08:38:12.332220 56610 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip node liveness: periodic gossip is disabled
[08:42:54][Step 2/2] W181018 08:38:12.351111 56708 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip system config: periodic gossip is disabled
[08:42:54][Step 2/2] W181018 08:38:12.351190 56709 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip node liveness: periodic gossip is disabled
[08:42:54][Step 2/2] I181018 08:38:12.513923 56504 storage/store_snapshot.go:621 [raftsnapshot,s2,r1/2:/M{in-ax}] sending Raft snapshot c63e459a at applied index 24
[08:42:54][Step 2/2] I181018 08:38:12.515820 56504 storage/store_snapshot.go:664 [raftsnapshot,s2,r1/2:/M{in-ax}] streamed snapshot to (n3,s3):3: kv pairs: 57, log entries: 2, rate-limit: 8.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:12.516325 56542 storage/replica_raftstorage.go:804 [s3,r1/3:/M{in-ax}] applying Raft snapshot at index 24 (id=c63e459a, encoded size=8277, 1 rocksdb batches, 2 log entries)
[08:42:54][Step 2/2] I181018 08:38:12.518130 56542 storage/replica_raftstorage.go:810 [s3,r1/3:/M{in-ax}] applied Raft snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] === RUN TestRefreshPendingCommands/reasonTicks
[08:42:54][Step 2/2] W181018 08:38:12.714356 56809 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip system config: periodic gossip is disabled
[08:42:54][Step 2/2] W181018 08:38:12.714637 56810 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip node liveness: periodic gossip is disabled
[08:42:54][Step 2/2] I181018 08:38:12.724594 56543 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:35383" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:12.781952 56543 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:12.782934 56543 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45049" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:12.786053 56141 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:35383
[08:42:54][Step 2/2] W181018 08:38:12.889841 56543 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:12.890879 56543 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:36077" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:12.894596 56959 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:35383
[08:42:54][Step 2/2] I181018 08:38:12.961676 56543 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 1cd568ce at applied index 16
[08:42:54][Step 2/2] I181018 08:38:12.962823 56543 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 3ms
[08:42:54][Step 2/2] I181018 08:38:12.964191 57078 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=1cd568ce, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:12.966602 57078 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:12.968614 56543 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:12.975221 56543 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=51aacc14] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:12.985882 56543 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot cbe4aeb8 at applied index 18
[08:42:54][Step 2/2] I181018 08:38:12.987567 56543 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:12.989361 56522 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=cbe4aeb8, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:12.992501 56522 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:12.996498 56543 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:13.023124 56543 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=14e45578] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] W181018 08:38:13.358174 57185 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip system config: periodic gossip is disabled
[08:42:54][Step 2/2] W181018 08:38:13.358496 57186 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip node liveness: periodic gossip is disabled
[08:42:54][Step 2/2] I181018 08:38:13.360940 56543 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:54][Step 2/2] I181018 08:38:13.361858 56543 storage/client_test.go:1252 test clock advanced to: 3.600000127,0
[08:42:54][Step 2/2] W181018 08:38:13.385082 57272 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip node liveness: periodic gossip is disabled
[08:42:54][Step 2/2] W181018 08:38:13.385410 57271 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip system config: periodic gossip is disabled
[08:42:54][Step 2/2] I181018 08:38:13.780304 56941 storage/store_snapshot.go:621 [raftsnapshot,s1,r1/1:/M{in-ax}] sending Raft snapshot 228f14a3 at applied index 26
[08:42:54][Step 2/2] I181018 08:38:13.782042 56941 storage/store_snapshot.go:664 [raftsnapshot,s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):3: kv pairs: 59, log entries: 2, rate-limit: 8.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:13.782785 57281 storage/replica_raftstorage.go:804 [s3,r1/3:/M{in-ax}] applying Raft snapshot at index 26 (id=228f14a3, encoded size=8375, 1 rocksdb batches, 2 log entries)
[08:42:54][Step 2/2] I181018 08:38:13.785154 57281 storage/replica_raftstorage.go:810 [s3,r1/3:/M{in-ax}] applied Raft snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:13.787221 57284 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:36077
[08:42:54][Step 2/2] I181018 08:38:13.837407 57073 storage/client_test.go:1252 [hb,txn=654cefb9,range-lookup=/Meta2/System/NodeLiveness/3/NULL] test clock advanced to: 5.400000129,0
[08:42:54][Step 2/2] W181018 08:38:13.840903 57279 storage/node_liveness.go:454 [hb] failed node liveness heartbeat: aborted in distSender: context deadline exceeded
[08:42:54][Step 2/2] --- PASS: TestRefreshPendingCommands (2.08s)
[08:42:54][Step 2/2] --- PASS: TestRefreshPendingCommands/reasonSnapshotApplied (0.78s)
[08:42:54][Step 2/2] --- PASS: TestRefreshPendingCommands/reasonTicks (1.30s)
[08:42:54][Step 2/2] === RUN TestLogGrowthWhenRefreshingPendingCommands
[08:42:54][Step 2/2] W181018 08:38:14.019262 57364 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip system config: periodic gossip is disabled
[08:42:54][Step 2/2] W181018 08:38:14.019262 57365 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip node liveness: periodic gossip is disabled
[08:42:54][Step 2/2] I181018 08:38:14.028802 57083 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:46225" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:14.082795 57083 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:14.083700 57083 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:33031" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:14.085375 57386 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:46225
[08:42:54][Step 2/2] W181018 08:38:14.128620 57083 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:14.129387 57083 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:39787" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:14.131916 57617 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:46225
[08:42:54][Step 2/2] W181018 08:38:14.177399 57083 gossip/gossip.go:1496 [n4] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:14.178217 57083 gossip/gossip.go:393 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:38739" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:14.180163 57749 gossip/client.go:129 [n4] started gossip client to 127.0.0.1:46225
[08:42:54][Step 2/2] W181018 08:38:14.226463 57083 gossip/gossip.go:1496 [n5] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:14.227636 57083 gossip/gossip.go:393 [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:38175" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:14.229021 57857 gossip/client.go:129 [n5] started gossip client to 127.0.0.1:46225
[08:42:54][Step 2/2] I181018 08:38:14.229904 57618 gossip/server.go:282 [n1] refusing gossip from n5 (max 3 conns); forwarding to n4 ({tcp 127.0.0.1:38739})
[08:42:54][Step 2/2] I181018 08:38:14.236049 57857 gossip/client.go:134 [n5] closing client to n1 (127.0.0.1:46225): received forward from n1 to 4 (127.0.0.1:38739)
[08:42:54][Step 2/2] I181018 08:38:14.236863 57755 gossip/gossip.go:1510 [n5] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:38:14.238321 57864 gossip/client.go:129 [n5] started gossip client to 127.0.0.1:38739
[08:42:54][Step 2/2] I181018 08:38:14.267112 57083 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 2f0aed2e at applied index 18
[08:42:54][Step 2/2] I181018 08:38:14.269865 57083 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 51, log entries: 8, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:14.277337 57761 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 18 (id=2f0aed2e, encoded size=8722, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:14.281420 57761 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:14.285592 57083 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:14.299787 57083 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=5e5054a0] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:14.321782 57083 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 92aaae8a at applied index 20
[08:42:54][Step 2/2] I181018 08:38:14.323696 57083 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 54, log entries: 10, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:14.325811 57908 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 20 (id=92aaae8a, encoded size=9664, 1 rocksdb batches, 10 log entries)
[08:42:54][Step 2/2] I181018 08:38:14.330380 57908 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:14.334339 57083 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:14.350394 57083 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=77bdf953] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:14.367385 57083 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot d1917bc7 at applied index 22
[08:42:54][Step 2/2] I181018 08:38:14.368986 57083 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n4,s4):?: kv pairs: 57, log entries: 12, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:14.371122 57777 storage/replica_raftstorage.go:804 [s4,r1/?:{-}] applying preemptive snapshot at index 22 (id=d1917bc7, encoded size=10667, 1 rocksdb batches, 12 log entries)
[08:42:54][Step 2/2] I181018 08:38:14.375923 57777 storage/replica_raftstorage.go:810 [s4,r1/?:/M{in-ax}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:14.379734 57083 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n4,s4):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:14.389833 57083 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=80a5dd1b] proposing ADD_REPLICA((n4,s4):4): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3 (n4,s4):4] next=5
[08:42:54][Step 2/2] I181018 08:38:14.403528 57083 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot df1e668f at applied index 24
[08:42:54][Step 2/2] I181018 08:38:14.404877 57083 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n5,s5):?: kv pairs: 60, log entries: 14, rate-limit: 2.0 MiB/sec, 3ms
[08:42:54][Step 2/2] I181018 08:38:14.406603 57939 storage/replica_raftstorage.go:804 [s5,r1/?:{-}] applying preemptive snapshot at index 24 (id=df1e668f, encoded size=11738, 1 rocksdb batches, 14 log entries)
[08:42:54][Step 2/2] I181018 08:38:14.410858 57939 storage/replica_raftstorage.go:810 [s5,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:14.414117 57083 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n5,s5):5): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, (n4,s4):4, next=5, gen=0]
[08:42:54][Step 2/2] I181018 08:38:14.429686 57083 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=fe834d2e] proposing ADD_REPLICA((n5,s5):5): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3 (n4,s4):4 (n5,s5):5] next=6
[08:42:54][Step 2/2] I181018 08:38:14.431891 57348 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=fe834d2e] proposing ADD_REPLICA((n5,s5):5): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3 (n4,s4):4 (n5,s5):5] next=6
[08:42:54][Step 2/2] I181018 08:38:14.437875 57348 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=fe834d2e] proposing ADD_REPLICA((n5,s5):5): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3 (n4,s4):4 (n5,s5):5] next=6
[08:42:54][Step 2/2] === RUN TestLogGrowthWhenRefreshingPendingCommands/proposeOnFollower=false
[08:42:54][Step 2/2] W181018 08:38:15.051648 58004 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip node liveness: periodic gossip is disabled
[08:42:54][Step 2/2] W181018 08:38:15.052133 58003 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip system config: periodic gossip is disabled
[08:42:54][Step 2/2] === RUN TestLogGrowthWhenRefreshingPendingCommands/proposeOnFollower=true
[08:42:54][Step 2/2] W181018 08:38:15.140417 58091 storage/store.go:1490 [s4,r1/4:/M{in-ax}] could not gossip node liveness: periodic gossip is disabled
[08:42:54][Step 2/2] W181018 08:38:15.143237 58090 storage/store.go:1490 [s4,r1/4:/M{in-ax}] could not gossip system config: periodic gossip is disabled
[08:42:54][Step 2/2] W181018 08:38:15.188459 58179 storage/store.go:1490 [s5,r1/5:/M{in-ax}] could not gossip system config: periodic gossip is disabled
[08:42:54][Step 2/2] W181018 08:38:15.188548 58180 storage/store.go:1490 [s5,r1/5:/M{in-ax}] could not gossip node liveness: periodic gossip is disabled
[08:42:54][Step 2/2] W181018 08:38:15.288841 57883 storage/raft_transport.go:282 unable to accept Raft message from (n1,s1):1: no handler registered for (n5,s5):5
[08:42:54][Step 2/2] W181018 08:38:15.290198 57881 storage/store.go:3662 [s1,r1/1:/M{in-ax}] raft error: node 5 claims to not contain store 5 for replica (n5,s5):5: store 5 was not found
[08:42:54][Step 2/2] W181018 08:38:15.290472 57879 storage/raft_transport.go:584 while processing outgoing Raft queue to node 5: store 5 was not found:
[08:42:54][Step 2/2] W181018 08:38:15.349730 57626 storage/raft_transport.go:282 unable to accept Raft message from (n2,s2):2: no handler registered for (n1,s1):1
[08:42:54][Step 2/2] W181018 08:38:15.351891 57911 storage/store.go:3662 [s2,r1/2:/M{in-ax}] raft error: node 1 claims to not contain store 1 for replica (n1,s1):1: store 1 was not found
[08:42:54][Step 2/2] W181018 08:38:15.352408 57909 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: store 1 was not found:
[08:42:54][Step 2/2] I181018 08:38:15.362280 57738 gossip/gossip.go:1510 [n4] node has connected to cluster via gossip
[08:42:54][Step 2/2] --- FAIL: TestLogGrowthWhenRefreshingPendingCommands (1.41s)
[08:42:54][Step 2/2] --- PASS: TestLogGrowthWhenRefreshingPendingCommands/proposeOnFollower=false (0.64s)
[08:42:54][Step 2/2] client_raft_test.go:1265: raft log size grew to 41 KiB
[08:42:54][Step 2/2] client_raft_test.go:1265: raft log size grew to 57 KiB
[08:42:54][Step 2/2] client_raft_test.go:1265: raft log size grew to 57 KiB
[08:42:54][Step 2/2] client_raft_test.go:1265: raft log size grew to 57 KiB
[08:42:54][Step 2/2] client_raft_test.go:1265: raft log size grew to 57 KiB
[08:42:54][Step 2/2] client_raft_test.go:1265: raft log size grew to 57 KiB
[08:42:54][Step 2/2] client_raft_test.go:1265: raft log size grew to 57 KiB
[08:42:54][Step 2/2] client_raft_test.go:1265: raft log size grew to 57 KiB
[08:42:54][Step 2/2] client_raft_test.go:1265: raft log size grew to 57 KiB
[08:42:54][Step 2/2] --- FAIL: TestLogGrowthWhenRefreshingPendingCommands/proposeOnFollower=true (0.22s)
[08:42:54][Step 2/2] client_raft_test.go:1263: raft log size grew to -23 KiB
[08:42:54][Step 2/2] === RUN TestStoreRangeUpReplicate
[08:42:54][Step 2/2] I181018 08:38:15.437879 58095 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:33723" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:15.485297 58095 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:15.486316 58095 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:40531" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:15.487607 58435 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:33723
[08:42:54][Step 2/2] W181018 08:38:15.532623 58095 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:15.533650 58095 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:38179" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:15.536944 58564 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:33723
[08:42:54][Step 2/2] I181018 08:38:15.648849 58095 storage/client_test.go:421 gossip network initialized
[08:42:54][Step 2/2] I181018 08:38:15.654224 58095 storage/store_snapshot.go:621 [replicate,s1,r1/1:/M{in-ax}] sending preemptive snapshot 6cbe00a1 at applied index 18
[08:42:54][Step 2/2] I181018 08:38:15.656428 58095 storage/store_snapshot.go:664 [replicate,s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 51, log entries: 8, rate-limit: 8.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:15.658183 58649 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=6cbe00a1, encoded size=9692, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:15.661427 58649 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:15.664956 58095 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:15.675725 58095 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=966e15ec] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
[08:42:54][Step 2/2] I181018 08:38:15.686675 58095 storage/store_snapshot.go:621 [replicate,s1,r1/1:/M{in-ax}] sending preemptive snapshot 2730a83e at applied index 20
[08:42:54][Step 2/2] I181018 08:38:15.688507 58095 storage/store_snapshot.go:664 [replicate,s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 54, log entries: 10, rate-limit: 8.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:15.689813 58557 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 20 (id=2730a83e, encoded size=10634, 1 rocksdb batches, 10 log entries)
[08:42:54][Step 2/2] I181018 08:38:15.693046 58557 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:15.696707 58095 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:15.714698 58095 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=315361de] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
[08:42:54][Step 2/2] --- PASS: TestStoreRangeUpReplicate (0.52s)
[08:42:54][Step 2/2] === RUN TestStoreRangeCorruptionChangeReplicas
[08:42:54][Step 2/2] I181018 08:38:15.943746 58670 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:44159" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:15.988792 58670 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:15.989936 58670 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:38219" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:15.992849 58800 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:44159
[08:42:54][Step 2/2] W181018 08:38:16.038268 58670 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:16.039498 58670 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:44865" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:16.040918 58559 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:44159
[08:42:54][Step 2/2] W181018 08:38:16.091200 58670 gossip/gossip.go:1496 [n4] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:16.092175 58670 gossip/gossip.go:393 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:40347" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:16.099110 59141 gossip/client.go:129 [n4] started gossip client to 127.0.0.1:44159
[08:42:54][Step 2/2] W181018 08:38:16.188479 58670 gossip/gossip.go:1496 [n5] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:16.189489 58670 gossip/gossip.go:393 [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:46553" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:16.193705 59253 gossip/client.go:129 [n5] started gossip client to 127.0.0.1:44159
[08:42:54][Step 2/2] I181018 08:38:16.197011 59240 gossip/server.go:282 [n1] refusing gossip from n5 (max 3 conns); forwarding to n4 ({tcp 127.0.0.1:40347})
[08:42:54][Step 2/2] I181018 08:38:16.202535 59253 gossip/client.go:134 [n5] closing client to n1 (127.0.0.1:44159): received forward from n1 to 4 (127.0.0.1:40347)
[08:42:54][Step 2/2] I181018 08:38:16.203705 59251 gossip/gossip.go:1510 [n5] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:38:16.205404 59145 gossip/client.go:129 [n5] started gossip client to 127.0.0.1:40347
[08:42:54][Step 2/2] W181018 08:38:16.252701 58670 gossip/gossip.go:1496 [n6] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:16.253533 58670 gossip/gossip.go:393 [n6] NodeDescriptor set to node_id:6 address:<network_field:"tcp" address_field:"127.0.0.1:41241" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:16.254874 59007 gossip/client.go:129 [n6] started gossip client to 127.0.0.1:44159
[08:42:54][Step 2/2] I181018 08:38:16.255751 59273 gossip/server.go:282 [n1] refusing gossip from n6 (max 3 conns); forwarding to n2 ({tcp 127.0.0.1:38219})
[08:42:54][Step 2/2] I181018 08:38:16.271144 59007 gossip/client.go:134 [n6] closing client to n1 (127.0.0.1:44159): received forward from n1 to 2 (127.0.0.1:38219)
[08:42:54][Step 2/2] I181018 08:38:16.273228 59366 gossip/gossip.go:1510 [n6] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:38:16.274957 59274 gossip/client.go:129 [n6] started gossip client to 127.0.0.1:38219
[08:42:54][Step 2/2] W181018 08:38:16.353354 58670 gossip/gossip.go:1496 [n7] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:16.354253 58670 gossip/gossip.go:393 [n7] NodeDescriptor set to node_id:7 address:<network_field:"tcp" address_field:"127.0.0.1:44843" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:16.356073 59510 gossip/client.go:129 [n7] started gossip client to 127.0.0.1:44159
[08:42:54][Step 2/2] I181018 08:38:16.358335 59358 gossip/server.go:282 [n1] refusing gossip from n7 (max 3 conns); forwarding to n4 ({tcp 127.0.0.1:40347})
[08:42:54][Step 2/2] I181018 08:38:16.362052 59510 gossip/client.go:134 [n7] closing client to n1 (127.0.0.1:44159): received forward from n1 to 4 (127.0.0.1:40347)
[08:42:54][Step 2/2] I181018 08:38:16.362857 59495 gossip/gossip.go:1510 [n7] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:38:16.364174 59500 gossip/client.go:129 [n7] started gossip client to 127.0.0.1:40347
[08:42:54][Step 2/2] W181018 08:38:16.409378 58670 gossip/gossip.go:1496 [n8] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:16.410356 58670 gossip/gossip.go:393 [n8] NodeDescriptor set to node_id:8 address:<network_field:"tcp" address_field:"127.0.0.1:35029" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:16.413972 59638 gossip/client.go:129 [n8] started gossip client to 127.0.0.1:44159
[08:42:54][Step 2/2] I181018 08:38:16.418183 59383 gossip/server.go:282 [n1] refusing gossip from n8 (max 3 conns); forwarding to n3 ({tcp 127.0.0.1:44865})
[08:42:54][Step 2/2] I181018 08:38:16.421291 59638 gossip/client.go:134 [n8] closing client to n1 (127.0.0.1:44159): received forward from n1 to 3 (127.0.0.1:44865)
[08:42:54][Step 2/2] I181018 08:38:16.422614 59636 gossip/gossip.go:1510 [n8] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:38:16.423680 59486 gossip/client.go:129 [n8] started gossip client to 127.0.0.1:44865
[08:42:54][Step 2/2] I181018 08:38:16.517266 58670 storage/client_test.go:421 gossip network initialized
[08:42:54][Step 2/2] I181018 08:38:16.523961 58670 storage/store_snapshot.go:621 [replicate,s1,r1/1:/M{in-ax}] sending preemptive snapshot 8b2e2467 at applied index 21
[08:42:54][Step 2/2] I181018 08:38:16.525370 58670 storage/store_snapshot.go:664 [replicate,s1,r1/1:/M{in-ax}] streamed snapshot to (n7,s7):?: kv pairs: 54, log entries: 11, rate-limit: 8.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:16.527129 59646 storage/replica_raftstorage.go:804 [s7,r1/?:{-}] applying preemptive snapshot at index 21 (id=8b2e2467, encoded size=9262, 1 rocksdb batches, 11 log entries)
[08:42:54][Step 2/2] I181018 08:38:16.531458 59646 storage/replica_raftstorage.go:810 [s7,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:16.535560 58670 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n7,s7):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:16.544376 58670 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=cee5ccb1] proposing ADD_REPLICA((n7,s7):2): updated=[(n1,s1):1 (n7,s7):2] next=3
[08:42:54][Step 2/2] I181018 08:38:16.557065 58670 storage/store_snapshot.go:621 [replicate,s1,r1/1:/M{in-ax}] sending preemptive snapshot 34b09a09 at applied index 23
[08:42:54][Step 2/2] I181018 08:38:16.558783 58670 storage/store_snapshot.go:664 [replicate,s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 57, log entries: 13, rate-limit: 8.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:16.562705 59651 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 23 (id=34b09a09, encoded size=10204, 1 rocksdb batches, 13 log entries)
[08:42:54][Step 2/2] I181018 08:38:16.567792 59651 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=4ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:16.572395 58670 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n7,s7):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:16.589938 58670 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=f1ff2ab2] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n7,s7):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:16.603621 58670 storage/store_snapshot.go:621 [replicate,s1,r1/1:/M{in-ax}] sending preemptive snapshot c6672fea at applied index 25
[08:42:54][Step 2/2] I181018 08:38:16.605231 58670 storage/store_snapshot.go:664 [replicate,s1,r1/1:/M{in-ax}] streamed snapshot to (n6,s6):?: kv pairs: 60, log entries: 15, rate-limit: 8.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:16.606514 59650 storage/replica_raftstorage.go:804 [s6,r1/?:{-}] applying preemptive snapshot at index 25 (id=c6672fea, encoded size=11211, 1 rocksdb batches, 15 log entries)
[08:42:54][Step 2/2] I181018 08:38:16.610770 59650 storage/replica_raftstorage.go:810 [s6,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:16.614945 58670 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n6,s6):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n7,s7):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:16.626657 58670 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=62b0bb37] proposing ADD_REPLICA((n6,s6):4): updated=[(n1,s1):1 (n7,s7):2 (n3,s3):3 (n6,s6):4] next=5
[08:42:54][Step 2/2] I181018 08:38:16.646461 58670 storage/store_snapshot.go:621 [replicate,s1,r1/1:/M{in-ax}] sending preemptive snapshot cafbf1f8 at applied index 27
[08:42:54][Step 2/2] I181018 08:38:16.648272 58670 storage/store_snapshot.go:664 [replicate,s1,r1/1:/M{in-ax}] streamed snapshot to (n5,s5):?: kv pairs: 63, log entries: 17, rate-limit: 8.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:16.650220 59538 storage/replica_raftstorage.go:804 [s5,r1/?:{-}] applying preemptive snapshot at index 27 (id=cafbf1f8, encoded size=12282, 1 rocksdb batches, 17 log entries)
[08:42:54][Step 2/2] I181018 08:38:16.659481 59538 storage/replica_raftstorage.go:810 [s5,r1/?:/M{in-ax}] applied preemptive snapshot in 9ms [clear=0ms batch=0ms entries=8ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:16.664722 58670 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n5,s5):5): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n7,s7):2, (n3,s3):3, (n6,s6):4, next=5, gen=0]
[08:42:54][Step 2/2] I181018 08:38:16.679179 58670 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=ce43e8f0] proposing ADD_REPLICA((n5,s5):5): updated=[(n1,s1):1 (n7,s7):2 (n3,s3):3 (n6,s6):4 (n5,s5):5] next=6
[08:42:54][Step 2/2] E181018 08:38:16.698839 59416 storage/replica.go:6712 [s7,r1/2:/M{in-ax}] stalling replica due to: boom
[08:42:54][Step 2/2] I181018 08:38:16.711966 59520 storage/store_snapshot.go:621 [replicate,s1,r1/1:/M{in-ax}] sending preemptive snapshot a7ceef7e at applied index 30
[08:42:54][Step 2/2] I181018 08:38:16.713917 59520 storage/store_snapshot.go:664 [replicate,s1,r1/1:/M{in-ax}] streamed snapshot to (n4,s4):?: kv pairs: 67, log entries: 20, rate-limit: 8.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:16.715669 59280 storage/replica_raftstorage.go:804 [s4,r1/?:{-}] applying preemptive snapshot at index 30 (id=a7ceef7e, encoded size=13606, 1 rocksdb batches, 20 log entries)
[08:42:54][Step 2/2] I181018 08:38:16.720941 59280 storage/replica_raftstorage.go:810 [s4,r1/?:/M{in-ax}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=4ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:16.724970 59520 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n4,s4):6): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n7,s7):2, (n3,s3):3, (n6,s6):4, (n5,s5):5, next=6, gen=0]
[08:42:54][Step 2/2] W181018 08:38:16.735530 59278 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n7,s7):2: replica corruption (processed=true): boom
[08:42:54][Step 2/2] W181018 08:38:16.740050 59278 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n7,s7):2: replica corruption (processed=true): boom
[08:42:54][Step 2/2] I181018 08:38:16.742663 59520 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=263970b1] proposing ADD_REPLICA((n4,s4):6): updated=[(n1,s1):1 (n7,s7):2 (n3,s3):3 (n6,s6):4 (n5,s5):5 (n4,s4):6] next=7
[08:42:54][Step 2/2] W181018 08:38:16.746075 59278 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n7,s7):2: replica corruption (processed=true): boom
[08:42:54][Step 2/2] W181018 08:38:16.749067 59278 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n7,s7):2: replica corruption (processed=true): boom
[08:42:54][Step 2/2] I181018 08:38:16.756869 58670 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n7,s7):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n7,s7):2, (n3,s3):3, (n6,s6):4, (n5,s5):5, (n4,s4):6, next=7, gen=0]
[08:42:54][Step 2/2] W181018 08:38:16.766973 59278 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n7,s7):2: replica corruption (processed=true): boom
[08:42:54][Step 2/2] W181018 08:38:16.769992 59278 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n7,s7):2: replica corruption (processed=true): boom
[08:42:54][Step 2/2] I181018 08:38:16.773235 58670 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=9a1b88fd] proposing REMOVE_REPLICA((n7,s7):2): updated=[(n1,s1):1 (n4,s4):6 (n3,s3):3 (n6,s6):4 (n5,s5):5] next=7
[08:42:54][Step 2/2] W181018 08:38:16.776041 59278 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n7,s7):2: replica corruption (processed=true): boom
[08:42:54][Step 2/2] W181018 08:38:16.780989 59278 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n7,s7):2: replica corruption (processed=true): boom
[08:42:54][Step 2/2] W181018 08:38:16.789391 59278 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n7,s7):2: replica corruption (processed=true): boom
[08:42:54][Step 2/2] E181018 08:38:16.794840 59172 storage/replica.go:6712 [s5,r1/5:/M{in-ax}] stalling replica due to: boom
[08:42:54][Step 2/2] I181018 08:38:16.805436 58670 storage/store_snapshot.go:621 [replicate,s1,r1/1:/M{in-ax}] sending preemptive snapshot 3eb88308 at applied index 35
[08:42:54][Step 2/2] I181018 08:38:16.806952 58670 storage/store_snapshot.go:664 [replicate,s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 74, log entries: 25, rate-limit: 8.0 MiB/sec, 3ms
[08:42:54][Step 2/2] I181018 08:38:16.808606 59687 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 35 (id=3eb88308, encoded size=16126, 1 rocksdb batches, 25 log entries)
[08:42:54][Step 2/2] I181018 08:38:16.815269 59687 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=5ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:16.820118 58670 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):7): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n4,s4):6, (n3,s3):3, (n6,s6):4, (n5,s5):5, next=7, gen=0]
[08:42:54][Step 2/2] W181018 08:38:16.826564 59673 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n5,s5):5: replica corruption (processed=true): boom
[08:42:54][Step 2/2] W181018 08:38:16.830627 59673 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n5,s5):5: replica corruption (processed=true): boom
[08:42:54][Step 2/2] W181018 08:38:16.832939 59673 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n5,s5):5: replica corruption (processed=true): boom
[08:42:54][Step 2/2] I181018 08:38:16.834250 58670 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=411e3871] proposing ADD_REPLICA((n2,s2):7): updated=[(n1,s1):1 (n4,s4):6 (n3,s3):3 (n6,s6):4 (n5,s5):5 (n2,s2):7] next=8
[08:42:54][Step 2/2] W181018 08:38:16.837879 59673 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n5,s5):5: replica corruption (processed=true): boom
[08:42:54][Step 2/2] W181018 08:38:16.841482 59673 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n5,s5):5: replica corruption (processed=true): boom
[08:42:54][Step 2/2] I181018 08:38:16.853949 58670 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n5,s5):5): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n4,s4):6, (n3,s3):3, (n6,s6):4, (n5,s5):5, (n2,s2):7, next=8, gen=0]
[08:42:54][Step 2/2] W181018 08:38:16.861585 59673 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n5,s5):5: replica corruption (processed=true): boom
[08:42:54][Step 2/2] W181018 08:38:16.865769 59673 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n5,s5):5: replica corruption (processed=true): boom
[08:42:54][Step 2/2] I181018 08:38:16.869994 58670 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=61b352f6] proposing REMOVE_REPLICA((n5,s5):5): updated=[(n1,s1):1 (n4,s4):6 (n3,s3):3 (n6,s6):4 (n2,s2):7] next=8
[08:42:54][Step 2/2] W181018 08:38:16.877408 59673 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n5,s5):5: replica corruption (processed=true): boom
[08:42:54][Step 2/2] W181018 08:38:16.889223 59673 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n5,s5):5: replica corruption (processed=true): boom
[08:42:54][Step 2/2] W181018 08:38:16.893243 59673 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n5,s5):5: replica corruption (processed=true): boom
[08:42:54][Step 2/2] E181018 08:38:16.920956 59060 storage/replica.go:6712 [s4,r1/6:/M{in-ax}] stalling replica due to: boom
[08:42:54][Step 2/2] I181018 08:38:16.934306 58670 storage/store_snapshot.go:621 [replicate,s1,r1/1:/M{in-ax}] sending preemptive snapshot 03d9a411 at applied index 40
[08:42:54][Step 2/2] I181018 08:38:16.935955 58670 storage/store_snapshot.go:664 [replicate,s1,r1/1:/M{in-ax}] streamed snapshot to (n8,s8):?: kv pairs: 81, log entries: 30, rate-limit: 8.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:16.937284 59658 storage/replica_raftstorage.go:804 [s8,r1/?:{-}] applying preemptive snapshot at index 40 (id=03d9a411, encoded size=18646, 1 rocksdb batches, 30 log entries)
[08:42:54][Step 2/2] I181018 08:38:16.945519 59658 storage/replica_raftstorage.go:810 [s8,r1/?:/M{in-ax}] applied preemptive snapshot in 8ms [clear=0ms batch=0ms entries=6ms commit=0ms]
[08:42:54][Step 2/2] W181018 08:38:16.946285 59699 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n4,s4):6: replica corruption (processed=true): boom
[08:42:54][Step 2/2] I181018 08:38:16.950668 58670 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n8,s8):8): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n4,s4):6, (n3,s3):3, (n6,s6):4, (n2,s2):7, next=8, gen=0]
[08:42:54][Step 2/2] W181018 08:38:16.958902 59699 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n4,s4):6: replica corruption (processed=true): boom
[08:42:54][Step 2/2] W181018 08:38:16.962753 59699 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n4,s4):6: replica corruption (processed=true): boom
[08:42:54][Step 2/2] I181018 08:38:16.966061 58670 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=5abd4d50] proposing ADD_REPLICA((n8,s8):8): updated=[(n1,s1):1 (n4,s4):6 (n3,s3):3 (n6,s6):4 (n2,s2):7 (n8,s8):8] next=9
[08:42:54][Step 2/2] W181018 08:38:16.969951 59699 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n4,s4):6: replica corruption (processed=true): boom
[08:42:54][Step 2/2] W181018 08:38:16.974467 59699 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n4,s4):6: replica corruption (processed=true): boom
[08:42:54][Step 2/2] I181018 08:38:16.988497 58670 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n4,s4):6): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n4,s4):6, (n3,s3):3, (n6,s6):4, (n2,s2):7, (n8,s8):8, next=9, gen=0]
[08:42:54][Step 2/2] W181018 08:38:16.995318 59699 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n4,s4):6: replica corruption (processed=true): boom
[08:42:54][Step 2/2] I181018 08:38:17.002468 58670 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=37a26185] proposing REMOVE_REPLICA((n4,s4):6): updated=[(n1,s1):1 (n8,s8):8 (n3,s3):3 (n6,s6):4 (n2,s2):7] next=9
[08:42:54][Step 2/2] W181018 08:38:17.006639 59699 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n4,s4):6: replica corruption (processed=true): boom
[08:42:54][Step 2/2] W181018 08:38:17.007044 59699 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n4,s4):6: replica corruption (processed=true): boom
[08:42:54][Step 2/2] W181018 08:38:17.016368 59699 storage/store.go:3666 [s1,r1/1:/M{in-ax}] got error from r1, replica (n4,s4):6: replica corruption (processed=true): boom
[08:42:54][Step 2/2] W181018 08:38:17.097661 59656 storage/raft_transport.go:584 while processing outgoing Raft queue to node 5: EOF:
[08:42:54][Step 2/2] W181018 08:38:17.099160 59731 storage/raft_transport.go:584 while processing outgoing Raft queue to node 8: EOF:
[08:42:54][Step 2/2] W181018 08:38:17.100118 59281 storage/raft_transport.go:584 while processing outgoing Raft queue to node 2: EOF:
[08:42:54][Step 2/2] W181018 08:38:17.100868 59653 storage/raft_transport.go:584 while processing outgoing Raft queue to node 6: rpc error: code = Unavailable desc = transport is closing:
[08:42:54][Step 2/2] W181018 08:38:17.102246 59636 gossip/gossip.go:1496 [n8] no incoming or outgoing connections
[08:42:54][Step 2/2] --- PASS: TestStoreRangeCorruptionChangeReplicas (1.22s)
[08:42:54][Step 2/2] === RUN TestUnreplicateFirstRange
[08:42:54][Step 2/2] I181018 08:38:17.181249 59692 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:46029" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:17.234042 59692 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:17.236313 59838 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:46029
[08:42:54][Step 2/2] I181018 08:38:17.243514 59692 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:39203" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:17.293400 59692 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:17.294766 59692 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:41885" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:17.305951 59727 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:46029
[08:42:54][Step 2/2] I181018 08:38:17.328155 59692 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 77ef3ee8 at applied index 16
[08:42:54][Step 2/2] I181018 08:38:17.329913 59692 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:17.331645 60075 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=77ef3ee8, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:17.335095 60075 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:17.337796 59692 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:17.346947 59692 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=e4cb64b9] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:17.533528 59692 storage/replica_command.go:816 [s2,r1/2:/M{in-ax}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:17.552542 59692 storage/replica.go:3884 [s2,r1/2:/M{in-ax},txn=942f24d8] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:17.570775 59692 storage/store_snapshot.go:621 [s2,r1/2:/M{in-ax}] sending preemptive snapshot 757eadb2 at applied index 24
[08:42:54][Step 2/2] I181018 08:38:17.572667 59692 storage/store_snapshot.go:664 [s2,r1/2:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 56, log entries: 14, rate-limit: 2.0 MiB/sec, 7ms
[08:42:54][Step 2/2] I181018 08:38:17.572775 60100 storage/store.go:3640 [s1,r1/1:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:17.575243 60107 storage/store.go:2580 [replicaGC,s1,r1/1:/M{in-ax}] removing replica r1/1
[08:42:54][Step 2/2] I181018 08:38:17.577203 60080 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 24 (id=757eadb2, encoded size=10839, 1 rocksdb batches, 14 log entries)
[08:42:54][Step 2/2] I181018 08:38:17.579141 60107 storage/replica.go:863 [replicaGC,s1,r1/1:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:17.586369 60080 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 9ms [clear=0ms batch=0ms entries=7ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:17.597552 59692 storage/replica_command.go:816 [s2,r1/2:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:17.606586 59692 storage/replica.go:3884 [s2,r1/2:/M{in-ax},txn=4235c384] proposing ADD_REPLICA((n3,s3):3): updated=[(n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:17.666521 59865 storage/node_liveness.go:790 [hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (context canceled)
[08:42:54][Step 2/2] W181018 08:38:17.667191 59865 storage/node_liveness.go:454 [hb] failed node liveness heartbeat: context canceled
[08:42:54][Step 2/2] I181018 08:38:17.668386 60123 internal/client/txn.go:637 async rollback failed: node unavailable; try another peer
[08:42:54][Step 2/2] --- PASS: TestUnreplicateFirstRange (0.59s)
[08:42:54][Step 2/2] === RUN TestChangeReplicasDescriptorInvariant
[08:42:54][Step 2/2] I181018 08:38:17.797789 60133 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41973" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:17.845423 60133 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:17.846512 60133 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:36945" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:17.848369 60349 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:41973
[08:42:54][Step 2/2] W181018 08:38:17.893922 60133 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:17.894797 60133 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:36619" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:17.896258 60086 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:41973
[08:42:54][Step 2/2] I181018 08:38:17.967935 60133 storage/store_snapshot.go:621 sending preemptive snapshot 70c78d0a at applied index 16
[08:42:54][Step 2/2] I181018 08:38:17.969503 60133 storage/store_snapshot.go:664 streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:17.971416 60363 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=70c78d0a, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:17.974594 60363 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:17.978058 60133 storage/replica_command.go:816 change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:17.987004 60133 storage/replica.go:3884 [txn=02fad98e,s1,r1/1:/M{in-ax}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:17.996368 60133 storage/store_snapshot.go:621 sending preemptive snapshot 9e9f7ca4 at applied index 18
[08:42:54][Step 2/2] I181018 08:38:17.997647 60133 storage/store_snapshot.go:664 streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 3ms
[08:42:54][Step 2/2] I181018 08:38:17.999264 59740 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=9e9f7ca4, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:18.002870 59740 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:18.006894 60133 storage/replica_command.go:816 change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:18.014958 60133 storage/store_snapshot.go:621 sending preemptive snapshot 7f483750 at applied index 18
[08:42:54][Step 2/2] I181018 08:38:18.016496 60133 storage/store_snapshot.go:664 streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 3ms
[08:42:54][Step 2/2] I181018 08:38:18.020526 60133 storage/replica_command.go:816 change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:18.036782 60133 storage/replica.go:3884 [txn=70f6ccce,s1,r1/1:/M{in-ax}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] W181018 08:38:18.074568 60369 storage/raft_transport.go:584 while processing outgoing Raft queue to node 2: rpc error: code = Unavailable desc = transport is closing:
[08:42:54][Step 2/2] --- PASS: TestChangeReplicasDescriptorInvariant (0.39s)
[08:42:54][Step 2/2] === RUN TestProgressWithDownNode
[08:42:54][Step 2/2] I181018 08:38:18.158265 60490 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:45475" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:18.205132 60490 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:18.205990 60490 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:34687" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:18.207766 60709 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:45475
[08:42:54][Step 2/2] W181018 08:38:18.271357 60490 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:18.272065 60490 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:34059" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:18.273710 60821 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:45475
[08:42:54][Step 2/2] I181018 08:38:18.294137 60490 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot b6cdb7d3 at applied index 16
[08:42:54][Step 2/2] I181018 08:38:18.295713 60490 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:18.297512 60602 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=b6cdb7d3, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:18.300078 60602 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:18.303229 60490 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:18.313388 60490 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=9f84652e] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:18.328798 60490 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 24efe63e at applied index 18
[08:42:54][Step 2/2] I181018 08:38:18.330432 60490 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:18.332215 60834 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=24efe63e, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:18.335278 60834 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:18.338071 60490 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:18.355026 60490 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=a7ad0d17] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] --- PASS: TestProgressWithDownNode (0.80s)
[08:42:54][Step 2/2] === RUN TestReplicateRestartAfterTruncationWithRemoveAndReAdd
[08:42:54][Step 2/2] I181018 08:38:18.962998 60733 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41555" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:19.007564 60733 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:19.008619 60733 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:39385" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:19.010284 61068 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:41555
[08:42:54][Step 2/2] W181018 08:38:19.060534 60733 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:19.061883 60733 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:37477" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:19.063513 61315 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:41555
[08:42:54][Step 2/2] I181018 08:38:19.092668 60733 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 06568da2 at applied index 16
[08:42:54][Step 2/2] I181018 08:38:19.094432 60733 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:19.096435 61303 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=06568da2, encoded size=8362, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:19.099388 61303 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:19.104204 60733 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:19.115002 60733 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=e652086d] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:19.128605 60733 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 12435972 at applied index 18
[08:42:54][Step 2/2] I181018 08:38:19.130085 60733 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:19.132409 61182 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=12435972, encoded size=9304, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:19.136525 61182 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:19.140341 60733 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:19.161501 60733 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=6ce3a523] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:19.467241 60733 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:19.484005 60733 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=6fe00278] proposing REMOVE_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:19.558843 60733 storage/store.go:2580 [replicaGC,s2,r1/2:/M{in-ax}] removing replica r1/2
[08:42:54][Step 2/2] I181018 08:38:19.562384 60733 storage/replica.go:863 [replicaGC,s2,r1/2:/M{in-ax}] removed 49 (44+5) keys in 3ms [clear=0ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:19.572301 60733 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot a79bd875 at applied index 27
[08:42:54][Step 2/2] I181018 08:38:19.573733 60733 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 61, log entries: 4, rate-limit: 2.0 MiB/sec, 3ms
[08:42:54][Step 2/2] I181018 08:38:19.574912 61350 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 27 (id=a79bd875, encoded size=8948, 1 rocksdb batches, 4 log entries)
[08:42:54][Step 2/2] I181018 08:38:19.577207 61350 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:19.581417 60733 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:19.593099 60733 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=1c54be46] proposing ADD_REPLICA((n2,s2):4): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):4] next=5
[08:42:54][Step 2/2] --- PASS: TestReplicateRestartAfterTruncationWithRemoveAndReAdd (1.03s)
[08:42:54][Step 2/2] === RUN TestReplicateRestartAfterTruncation
[08:42:54][Step 2/2] I181018 08:38:19.988312 61461 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:38147" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:20.050988 61461 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:20.052440 61461 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:43233" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:20.054177 61684 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:38147
[08:42:54][Step 2/2] W181018 08:38:20.108656 61461 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:20.109397 61461 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:34357" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:20.113372 61569 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:38147
[08:42:54][Step 2/2] I181018 08:38:20.189333 61461 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot ce529ef1 at applied index 16
[08:42:54][Step 2/2] I181018 08:38:20.191103 61461 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:20.192908 61788 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=ce529ef1, encoded size=8362, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:20.195503 61788 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:20.198350 61461 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:20.208350 61461 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=47a7f3f0] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:20.219769 61461 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot dcd45bfe at applied index 18
[08:42:54][Step 2/2] I181018 08:38:20.221207 61461 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:20.222636 61793 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=dcd45bfe, encoded size=9304, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:20.225979 61793 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:20.231015 61461 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:20.251323 61461 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=5d1bdbdb] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] W181018 08:38:20.655041 61804 storage/raft_transport.go:584 while processing outgoing Raft queue to node 3: rpc error: code = Unavailable desc = transport is closing:
[08:42:54][Step 2/2] --- PASS: TestReplicateRestartAfterTruncation (0.75s)
[08:42:54][Step 2/2] === RUN TestReplicateAddAndRemove
[08:42:54][Step 2/2] I181018 08:38:20.742657 61888 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:45881" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:20.821334 61888 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:20.822292 61888 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:43843" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:20.825710 62122 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:45881
[08:42:54][Step 2/2] W181018 08:38:20.929484 61888 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:20.932262 62291 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:45881
[08:42:54][Step 2/2] I181018 08:38:20.936235 61888 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:42659" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:21.003103 61888 gossip/gossip.go:1496 [n4] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:21.004227 61888 gossip/gossip.go:393 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:46247" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:21.009088 62399 gossip/client.go:129 [n4] started gossip client to 127.0.0.1:45881
[08:42:54][Step 2/2] I181018 08:38:21.087506 61888 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot a9c6154a at applied index 17
[08:42:54][Step 2/2] I181018 08:38:21.088979 61888 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n4,s4):?: kv pairs: 50, log entries: 7, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:21.091187 62009 storage/replica_raftstorage.go:804 [s4,r1/?:{-}] applying preemptive snapshot at index 17 (id=a9c6154a, encoded size=8514, 1 rocksdb batches, 7 log entries)
[08:42:54][Step 2/2] I181018 08:38:21.094768 62009 storage/replica_raftstorage.go:810 [s4,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:21.097652 61888 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n4,s4):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:21.109718 61888 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=43bd968d] proposing ADD_REPLICA((n4,s4):2): updated=[(n1,s1):1 (n4,s4):2] next=3
[08:42:54][Step 2/2] I181018 08:38:21.139891 61888 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 52fbfbe1 at applied index 19
[08:42:54][Step 2/2] I181018 08:38:21.141471 61888 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 53, log entries: 9, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:21.143281 62171 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 19 (id=52fbfbe1, encoded size=9456, 1 rocksdb batches, 9 log entries)
[08:42:54][Step 2/2] I181018 08:38:21.147388 62171 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:21.152027 61888 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n4,s4):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:21.177982 61888 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=c70b308c] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n4,s4):2 (n2,s2):3] next=4
[08:42:54][Step 2/2] I181018 08:38:21.256147 61888 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 179c8b98 at applied index 23
[08:42:54][Step 2/2] I181018 08:38:21.257613 61888 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 58, log entries: 13, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:21.259498 61836 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 23 (id=179c8b98, encoded size=10761, 1 rocksdb batches, 13 log entries)
[08:42:54][Step 2/2] I181018 08:38:21.267625 61836 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 8ms [clear=0ms batch=0ms entries=6ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:21.271671 61888 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n4,s4):2, (n2,s2):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:21.291988 61888 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=c6711426] proposing ADD_REPLICA((n3,s3):4): updated=[(n1,s1):1 (n4,s4):2 (n2,s2):3 (n3,s3):4] next=5
[08:42:54][Step 2/2] I181018 08:38:21.446593 61888 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n4,s4):2, (n2,s2):3, (n3,s3):4, next=5, gen=0]
[08:42:54][Step 2/2] I181018 08:38:21.464850 61888 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=912d1282] proposing REMOVE_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n4,s4):2 (n3,s3):4] next=5
[08:42:54][Step 2/2] I181018 08:38:21.545727 61888 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:54][Step 2/2] I181018 08:38:21.566044 61888 storage/store.go:2580 [replicaGC,s2,r1/3:/M{in-ax}] removing replica r1/3
[08:42:54][Step 2/2] I181018 08:38:21.567539 61888 storage/replica.go:863 [replicaGC,s2,r1/3:/M{in-ax}] removed 50 (45+5) keys in 1ms [clear=0ms commit=0ms]
[08:42:54][Step 2/2] W181018 08:38:21.596820 62247 storage/store.go:1490 [s3,r1/4:/M{in-ax}] could not gossip first range descriptor: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:21.621854 62318 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: rpc error: code = Canceled desc = context canceled:
[08:42:54][Step 2/2] W181018 08:38:21.623025 62451 storage/raft_transport.go:584 while processing outgoing Raft queue to node 3: rpc error: code = Canceled desc = context canceled:
[08:42:54][Step 2/2] --- PASS: TestReplicateAddAndRemove (0.96s)
[08:42:54][Step 2/2] === RUN TestReplicateRemoveAndAdd
[08:42:54][Step 2/2] I181018 08:38:21.693554 62541 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:35817" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:21.758499 62541 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:21.759837 62541 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:43663" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:21.761660 62460 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:35817
[08:42:54][Step 2/2] W181018 08:38:21.816194 62541 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:21.817415 62541 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:39081" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:21.819287 62461 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:35817
[08:42:54][Step 2/2] W181018 08:38:21.876015 62541 gossip/gossip.go:1496 [n4] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:21.876828 62541 gossip/gossip.go:393 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:40021" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:21.879031 62975 gossip/client.go:129 [n4] started gossip client to 127.0.0.1:35817
[08:42:54][Step 2/2] I181018 08:38:21.903847 62541 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 4871f527 at applied index 17
[08:42:54][Step 2/2] I181018 08:38:21.907273 62541 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n4,s4):?: kv pairs: 50, log entries: 7, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:21.910278 62763 storage/replica_raftstorage.go:804 [s4,r1/?:{-}] applying preemptive snapshot at index 17 (id=4871f527, encoded size=8514, 1 rocksdb batches, 7 log entries)
[08:42:54][Step 2/2] I181018 08:38:21.913946 62763 storage/replica_raftstorage.go:810 [s4,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:21.918032 62541 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n4,s4):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:21.928907 62541 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=fc929fe6] proposing ADD_REPLICA((n4,s4):2): updated=[(n1,s1):1 (n4,s4):2] next=3
[08:42:54][Step 2/2] I181018 08:38:21.948310 62541 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot adf245c7 at applied index 19
[08:42:54][Step 2/2] I181018 08:38:21.949968 62541 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 53, log entries: 9, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:21.952455 63004 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 19 (id=adf245c7, encoded size=9456, 1 rocksdb batches, 9 log entries)
[08:42:54][Step 2/2] I181018 08:38:21.957012 63004 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:21.960550 62541 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n4,s4):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:21.985699 62541 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b196e956] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n4,s4):2 (n2,s2):3] next=4
[08:42:54][Step 2/2] I181018 08:38:22.299809 62541 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n4,s4):2, (n2,s2):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:22.314067 62541 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=c2e97975] proposing REMOVE_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n4,s4):2] next=4
[08:42:54][Step 2/2] I181018 08:38:22.337302 62541 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 7d94fa98 at applied index 27
[08:42:54][Step 2/2] I181018 08:38:22.339114 62541 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 63, log entries: 17, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:22.341736 63044 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 27 (id=7d94fa98, encoded size=12045, 1 rocksdb batches, 17 log entries)
[08:42:54][Step 2/2] I181018 08:38:22.357931 63044 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 16ms [clear=0ms batch=0ms entries=14ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:22.361676 62541 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n4,s4):2, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:22.386000 62541 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=6d0c5cb0] proposing ADD_REPLICA((n3,s3):4): updated=[(n1,s1):1 (n4,s4):2 (n3,s3):4] next=5
[08:42:54][Step 2/2] I181018 08:38:22.711475 62541 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:54][Step 2/2] I181018 08:38:22.728955 62541 storage/store.go:2580 [replicaGC,s2,r1/3:/M{in-ax}] removing replica r1/3
[08:42:54][Step 2/2] I181018 08:38:22.730654 62541 storage/replica.go:863 [replicaGC,s2,r1/3:/M{in-ax}] removed 50 (45+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] W181018 08:38:22.737133 62873 internal/client/txn.go:532 [hb] failure aborting transaction: node unavailable; try another peer; abort caused by: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:22.737523 62873 storage/node_liveness.go:454 [hb] failed node liveness heartbeat: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:22.739413 63123 internal/client/txn.go:532 [hb] failure aborting transaction: node unavailable; try another peer; abort caused by: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:22.739897 63123 storage/node_liveness.go:454 [hb] failed node liveness heartbeat: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:22.767423 62962 storage/store.go:1490 [s4,r1/2:/M{in-ax}] could not gossip first range descriptor: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:22.769752 63009 storage/raft_transport.go:282 unable to accept Raft message from (n4,s4):?: no handler registered for (n1,s1):?
[08:42:54][Step 2/2] W181018 08:38:22.771564 62983 storage/store.go:3662 [s4] raft error: node 1 claims to not contain store 1 for replica (n1,s1):?: store 1 was not found
[08:42:54][Step 2/2] W181018 08:38:22.772127 63008 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: store 1 was not found:
[08:42:54][Step 2/2] --- PASS: TestReplicateRemoveAndAdd (1.17s)
[08:42:54][Step 2/2] === RUN TestQuotaPool
[08:42:54][Step 2/2] I181018 08:38:22.853712 63141 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:34163" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:22.899174 63141 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:22.900680 63141 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:38199" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:22.901467 62290 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:34163
[08:42:54][Step 2/2] W181018 08:38:22.960538 63141 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:22.961651 63141 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:40151" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:22.963196 63492 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:34163
[08:42:54][Step 2/2] I181018 08:38:22.988118 63141 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot f480b784 at applied index 16
[08:42:54][Step 2/2] I181018 08:38:22.989379 63141 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 3ms
[08:42:54][Step 2/2] I181018 08:38:22.991354 63484 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=f480b784, encoded size=8362, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:22.994756 63484 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:22.998873 63141 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:23.010307 63141 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=64b46fa4] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:23.021821 63141 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 51d7281f at applied index 18
[08:42:54][Step 2/2] I181018 08:38:23.023245 63141 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:23.025780 63370 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=51d7281f, encoded size=9304, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:23.029827 63370 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:23.033729 63141 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:23.056848 63141 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b5c1f133] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] --- PASS: TestQuotaPool (0.61s)
[08:42:54][Step 2/2] === RUN TestWedgedReplicaDetection
[08:42:54][Step 2/2] I181018 08:38:23.472318 63555 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:42617" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:23.524600 63555 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:23.526256 63555 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:40331" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:23.528367 63726 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:42617
[08:42:54][Step 2/2] W181018 08:38:23.580844 63555 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:23.581695 63555 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:41035" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:23.585437 63127 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:42617
[08:42:54][Step 2/2] I181018 08:38:23.608312 63555 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 42d75b68 at applied index 16
[08:42:54][Step 2/2] I181018 08:38:23.609865 63555 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 3ms
[08:42:54][Step 2/2] I181018 08:38:23.611228 63878 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=42d75b68, encoded size=8362, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:23.613679 63878 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:23.616481 63555 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:23.624101 63555 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=ef27c8b5] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:23.636392 63555 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 81440e98 at applied index 18
[08:42:54][Step 2/2] I181018 08:38:23.638277 63555 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:23.639956 63546 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=81440e98, encoded size=9304, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:23.644091 63546 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:23.649822 63555 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:23.669628 63555 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=a0183b82] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] --- PASS: TestWedgedReplicaDetection (0.60s)
[08:42:54][Step 2/2] === RUN TestRaftHeartbeats
[08:42:54][Step 2/2] I181018 08:38:24.061807 63880 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:36335" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:24.106382 63880 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:24.107427 63880 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:34183" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:24.110472 63871 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:36335
[08:42:54][Step 2/2] W181018 08:38:24.177665 63880 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:24.178931 63880 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:44673" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:24.180760 64174 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:36335
[08:42:54][Step 2/2] I181018 08:38:24.204547 63880 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot a05cdc6b at applied index 16
[08:42:54][Step 2/2] I181018 08:38:24.206364 63880 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:24.207906 64246 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=a05cdc6b, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:24.210201 64246 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:24.212927 63880 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:24.221313 63880 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=ba08169e] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:24.233698 63880 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 2b0bdc41 at applied index 18
[08:42:54][Step 2/2] I181018 08:38:24.235041 63880 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:24.237104 64247 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=2b0bdc41, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:24.240754 64247 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:24.243825 63880 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:24.263053 63880 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=e2dd305f] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] W181018 08:38:25.081291 64229 storage/raft_transport.go:584 while processing outgoing Raft queue to node 2: EOF:
[08:42:54][Step 2/2] W181018 08:38:25.081339 64230 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: rpc error: code = Unavailable desc = transport is closing:
[08:42:54][Step 2/2] --- PASS: TestRaftHeartbeats (1.08s)
[08:42:54][Step 2/2] === RUN TestReportUnreachableHeartbeats
[08:42:54][Step 2/2] I181018 08:38:25.161041 64254 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:33989" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:25.224877 64254 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:25.225660 64254 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:35969" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:25.231877 64520 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:33989
[08:42:54][Step 2/2] W181018 08:38:25.343817 64254 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:25.344997 64254 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:35121" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:25.346634 64618 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:33989
[08:42:54][Step 2/2] I181018 08:38:25.430912 64254 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 848598db at applied index 16
[08:42:54][Step 2/2] I181018 08:38:25.432504 64254 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:25.434454 64426 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=848598db, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:25.437882 64426 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:25.447891 64254 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:25.456262 64254 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=34faa76f] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:25.467294 64254 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot b38313d3 at applied index 18
[08:42:54][Step 2/2] I181018 08:38:25.469010 64254 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:25.470714 64653 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=b38313d3, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:25.474122 64653 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:25.477413 64254 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:25.507321 64254 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=a65f037d] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] W181018 08:38:26.259317 64619 internal/client/txn.go:532 [hb] failure aborting transaction: node unavailable; try another peer; abort caused by: result is ambiguous (server shutdown)
[08:42:54][Step 2/2] I181018 08:38:26.259644 64619 storage/node_liveness.go:790 [hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (server shutdown)
[08:42:54][Step 2/2] W181018 08:38:26.262387 64619 internal/client/txn.go:532 [hb] failure aborting transaction: node unavailable; try another peer; abort caused by: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:26.262742 64619 storage/node_liveness.go:454 [hb] failed node liveness heartbeat: node unavailable; try another peer
[08:42:54][Step 2/2] --- PASS: TestReportUnreachableHeartbeats (1.19s)
[08:42:54][Step 2/2] === RUN TestReportUnreachableRemoveRace
[08:42:54][Step 2/2] I181018 08:38:26.379453 64415 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:42547" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:26.429141 64415 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:26.430215 64415 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45611" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:26.434163 64866 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:42547
[08:42:54][Step 2/2] W181018 08:38:26.491679 64415 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:26.492750 64415 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:37643" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:26.494682 64930 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:42547
[08:42:54][Step 2/2] I181018 08:38:26.571543 64415 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 4b8ba02e at applied index 16
[08:42:54][Step 2/2] I181018 08:38:26.572899 64415 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 3ms
[08:42:54][Step 2/2] I181018 08:38:26.574145 65004 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=4b8ba02e, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:26.576527 65004 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:26.579204 64415 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:26.589594 64415 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=8932e310] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:26.601992 64415 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 1108f7b6 at applied index 18
[08:42:54][Step 2/2] I181018 08:38:26.604659 64415 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:26.606455 65006 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=1108f7b6, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:26.609869 65006 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:26.613197 64415 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:26.631624 64415 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=a1774960] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:26.825833 64415 storage/replica_command.go:816 [s2,r1/2:/M{in-ax}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:26.894372 64415 storage/replica.go:3884 [s2,r1/2:/M{in-ax},txn=cfd79234] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n3,s3):3 (n2,s2):2] next=4
[08:42:54][Step 2/2] I181018 08:38:26.906510 64431 storage/store.go:3640 [s1,r1/1:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:26.906691 65059 storage/store.go:2580 [replicaGC,s1,r1/1:/M{in-ax}] removing replica r1/1
[08:42:54][Step 2/2] I181018 08:38:26.907420 65038 storage/store.go:3640 [s1,r1/1:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:26.908765 65059 storage/replica.go:863 [replicaGC,s1,r1/1:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:26.961006 64415 storage/store_snapshot.go:621 [s2,r1/2:/M{in-ax}] sending preemptive snapshot 54097efa at applied index 27
[08:42:54][Step 2/2] I181018 08:38:26.962854 64415 storage/store_snapshot.go:664 [s2,r1/2:/M{in-ax}] streamed snapshot to (n1,s1):?: kv pairs: 60, log entries: 17, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:26.966236 65046 storage/replica_raftstorage.go:804 [s1,r1/?:{-}] applying preemptive snapshot at index 27 (id=54097efa, encoded size=12090, 1 rocksdb batches, 17 log entries)
[08:42:54][Step 2/2] I181018 08:38:26.973377 65046 storage/replica_raftstorage.go:810 [s1,r1/?:/M{in-ax}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=6ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:26.978026 64415 storage/replica_command.go:816 [s2,r1/2:/M{in-ax}] change replicas (ADD_REPLICA (n1,s1):4): read existing descriptor r1:/M{in-ax} [(n3,s3):3, (n2,s2):2, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:26.990238 64415 storage/replica.go:3884 [s2,r1/2:/M{in-ax},txn=53a60417] proposing ADD_REPLICA((n1,s1):4): updated=[(n3,s3):3 (n2,s2):2 (n1,s1):4] next=5
[08:42:54][Step 2/2] I181018 08:38:27.185338 64415 storage/replica_command.go:816 [s1,r1/4:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n3,s3):3, (n2,s2):2, (n1,s1):4, next=5, gen=0]
[08:42:54][Step 2/2] I181018 08:38:27.202297 64415 storage/replica.go:3884 [s1,r1/4:/M{in-ax},txn=cc7e79d3] proposing REMOVE_REPLICA((n2,s2):2): updated=[(n3,s3):3 (n1,s1):4] next=5
[08:42:54][Step 2/2] I181018 08:38:27.225920 65016 storage/store.go:2580 [replicaGC,s2,r1/2:/M{in-ax}] removing replica r1/2
[08:42:54][Step 2/2] I181018 08:38:27.227450 64433 storage/store.go:3640 [s2,r1/2:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:27.227697 65016 storage/replica.go:863 [replicaGC,s2,r1/2:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=0ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:27.286125 64415 storage/store_snapshot.go:621 [s1,r1/4:/M{in-ax}] sending preemptive snapshot e5d9a1ee at applied index 37
[08:42:54][Step 2/2] I181018 08:38:27.288137 64415 storage/store_snapshot.go:664 [s1,r1/4:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 69, log entries: 27, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:27.290532 64889 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 37 (id=e5d9a1ee, encoded size=15070, 1 rocksdb batches, 27 log entries)
[08:42:54][Step 2/2] I181018 08:38:27.298790 64889 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 8ms [clear=0ms batch=0ms entries=6ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:27.303105 64415 storage/replica_command.go:816 [s1,r1/4:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):5): read existing descriptor r1:/M{in-ax} [(n3,s3):3, (n1,s1):4, next=5, gen=0]
[08:42:54][Step 2/2] I181018 08:38:27.314776 64415 storage/replica.go:3884 [s1,r1/4:/M{in-ax},txn=14438c50] proposing ADD_REPLICA((n2,s2):5): updated=[(n3,s3):3 (n1,s1):4 (n2,s2):5] next=6
[08:42:54][Step 2/2] I181018 08:38:27.392950 64415 storage/replica_command.go:816 [s2,r1/5:/M{in-ax}] change replicas (REMOVE_REPLICA (n1,s1):4): read existing descriptor r1:/M{in-ax} [(n3,s3):3, (n1,s1):4, (n2,s2):5, next=6, gen=0]
[08:42:54][Step 2/2] I181018 08:38:27.409039 64415 storage/replica.go:3884 [s2,r1/5:/M{in-ax},txn=aeab2e43] proposing REMOVE_REPLICA((n1,s1):4): updated=[(n3,s3):3 (n2,s2):5] next=6
[08:42:54][Step 2/2] I181018 08:38:27.418933 64431 storage/store.go:3640 [s1,r1/4:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:27.419724 64894 storage/store.go:2580 [replicaGC,s1,r1/4:/M{in-ax}] removing replica r1/4
[08:42:54][Step 2/2] I181018 08:38:27.421304 64894 storage/replica.go:863 [replicaGC,s1,r1/4:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:27.478794 64415 storage/store_snapshot.go:621 [s2,r1/5:/M{in-ax}] sending preemptive snapshot 1dd79471 at applied index 46
[08:42:54][Step 2/2] I181018 08:38:27.481352 64415 storage/store_snapshot.go:664 [s2,r1/5:/M{in-ax}] streamed snapshot to (n1,s1):?: kv pairs: 78, log entries: 36, rate-limit: 2.0 MiB/sec, 9ms
[08:42:54][Step 2/2] I181018 08:38:27.487029 65080 storage/replica_raftstorage.go:804 [s1,r1/?:{-}] applying preemptive snapshot at index 46 (id=1dd79471, encoded size=17710, 1 rocksdb batches, 36 log entries)
[08:42:54][Step 2/2] I181018 08:38:27.503436 65080 storage/replica_raftstorage.go:810 [s1,r1/?:/M{in-ax}] applied preemptive snapshot in 16ms [clear=0ms batch=0ms entries=14ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:27.508738 64415 storage/replica_command.go:816 [s2,r1/5:/M{in-ax}] change replicas (ADD_REPLICA (n1,s1):6): read existing descriptor r1:/M{in-ax} [(n3,s3):3, (n2,s2):5, next=6, gen=0]
[08:42:54][Step 2/2] I181018 08:38:27.522543 64415 storage/replica.go:3884 [s2,r1/5:/M{in-ax},txn=d9d42651] proposing ADD_REPLICA((n1,s1):6): updated=[(n3,s3):3 (n2,s2):5 (n1,s1):6] next=7
[08:42:54][Step 2/2] I181018 08:38:27.831720 64415 storage/replica_command.go:816 [s1,r1/6:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):5): read existing descriptor r1:/M{in-ax} [(n3,s3):3, (n2,s2):5, (n1,s1):6, next=7, gen=0]
[08:42:54][Step 2/2] I181018 08:38:27.845787 64415 storage/replica.go:3884 [s1,r1/6:/M{in-ax},txn=7e75587a] proposing REMOVE_REPLICA((n2,s2):5): updated=[(n3,s3):3 (n1,s1):6] next=7
[08:42:54][Step 2/2] I181018 08:38:27.872262 65038 storage/store.go:3640 [s2,r1/5:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:27.873319 65083 storage/store.go:2580 [replicaGC,s2,r1/5:/M{in-ax}] removing replica r1/5
[08:42:54][Step 2/2] I181018 08:38:27.874057 64433 storage/store.go:3640 [s2,r1/5:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:27.876170 65083 storage/replica.go:863 [replicaGC,s2,r1/5:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:27.878653 64431 storage/store.go:3659 [s1,r1/6:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:27.928399 64415 storage/store_snapshot.go:621 [s1,r1/6:/M{in-ax}] sending preemptive snapshot 6c4850bc at applied index 55
[08:42:54][Step 2/2] I181018 08:38:27.930457 64415 storage/store_snapshot.go:664 [s1,r1/6:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 88, log entries: 45, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:27.932094 64897 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 55 (id=6c4850bc, encoded size=20514, 1 rocksdb batches, 45 log entries)
[08:42:54][Step 2/2] I181018 08:38:27.942869 64897 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 11ms [clear=0ms batch=0ms entries=9ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:27.945988 64415 storage/replica_command.go:816 [s1,r1/6:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):7): read existing descriptor r1:/M{in-ax} [(n3,s3):3, (n1,s1):6, next=7, gen=0]
[08:42:54][Step 2/2] I181018 08:38:28.188346 64415 storage/replica.go:3884 [s1,r1/6:/M{in-ax},txn=8ac93eec] proposing ADD_REPLICA((n2,s2):7): updated=[(n3,s3):3 (n1,s1):6 (n2,s2):7] next=8
[08:42:54][Step 2/2] I181018 08:38:28.259597 64415 storage/replica_command.go:816 [s2,r1/7:/M{in-ax}] change replicas (REMOVE_REPLICA (n1,s1):6): read existing descriptor r1:/M{in-ax} [(n3,s3):3, (n1,s1):6, (n2,s2):7, next=8, gen=0]
[08:42:54][Step 2/2] I181018 08:38:28.282004 64415 storage/replica.go:3884 [s2,r1/7:/M{in-ax},txn=55b9ce37] proposing REMOVE_REPLICA((n1,s1):6): updated=[(n3,s3):3 (n2,s2):7] next=8
[08:42:54][Step 2/2] I181018 08:38:28.292259 64431 storage/store.go:3640 [s1,r1/6:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:28.293171 65054 storage/store.go:2580 [replicaGC,s1,r1/6:/M{in-ax}] removing replica r1/6
[08:42:54][Step 2/2] I181018 08:38:28.294667 65054 storage/replica.go:863 [replicaGC,s1,r1/6:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:28.347825 64415 storage/store_snapshot.go:621 [s2,r1/7:/M{in-ax}] sending preemptive snapshot 2ba6e23b at applied index 67
[08:42:54][Step 2/2] I181018 08:38:28.350149 64415 storage/store_snapshot.go:664 [s2,r1/7:/M{in-ax}] streamed snapshot to (n1,s1):?: kv pairs: 97, log entries: 57, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:28.353987 65087 storage/replica_raftstorage.go:804 [s1,r1/?:{-}] applying preemptive snapshot at index 67 (id=2ba6e23b, encoded size=23619, 1 rocksdb batches, 57 log entries)
[08:42:54][Step 2/2] I181018 08:38:28.368272 65087 storage/replica_raftstorage.go:810 [s1,r1/?:/M{in-ax}] applied preemptive snapshot in 14ms [clear=0ms batch=0ms entries=13ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:28.372920 64415 storage/replica_command.go:816 [s2,r1/7:/M{in-ax}] change replicas (ADD_REPLICA (n1,s1):8): read existing descriptor r1:/M{in-ax} [(n3,s3):3, (n2,s2):7, next=8, gen=0]
[08:42:54][Step 2/2] I181018 08:38:28.396936 64415 storage/replica.go:3884 [s2,r1/7:/M{in-ax},txn=a11f20e0] proposing ADD_REPLICA((n1,s1):8): updated=[(n3,s3):3 (n2,s2):7 (n1,s1):8] next=9
[08:42:54][Step 2/2] --- PASS: TestReportUnreachableRemoveRace (2.23s)
[08:42:54][Step 2/2] === RUN TestReplicateAfterSplit
[08:42:54][Step 2/2] I181018 08:38:28.588515 64898 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:35277" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:28.638857 64898 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:28.639935 64898 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:39405" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:28.641854 65132 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:35277
[08:42:54][Step 2/2] I181018 08:38:28.663574 64898 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "m" [r2]
[08:42:54][Step 2/2] I181018 08:38:28.694194 64898 storage/store_snapshot.go:621 [s1,r2/1:{m-/Max}] sending preemptive snapshot d87967db at applied index 14
[08:42:54][Step 2/2] I181018 08:38:28.695454 64898 storage/store_snapshot.go:664 [s1,r2/1:{m-/Max}] streamed snapshot to (n2,s2):?: kv pairs: 43, log entries: 4, rate-limit: 2.0 MiB/sec, 3ms
[08:42:54][Step 2/2] I181018 08:38:28.696823 65133 storage/replica_raftstorage.go:804 [s2,r2/?:{-}] applying preemptive snapshot at index 14 (id=d87967db, encoded size=7710, 1 rocksdb batches, 4 log entries)
[08:42:54][Step 2/2] I181018 08:38:28.698874 65133 storage/replica_raftstorage.go:810 [s2,r2/?:{m-/Max}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:28.702187 64898 storage/replica_command.go:816 [s1,r2/1:{m-/Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:{m-/Max} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:28.721328 64898 storage/replica.go:3884 [s1,r2/1:{m-/Max},txn=aba9c019] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] --- PASS: TestReplicateAfterSplit (0.27s)
[08:42:54][Step 2/2] === RUN TestReplicaRemovalCampaign
[08:42:54][Step 2/2] I181018 08:38:28.847965 65055 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:38307" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:28.900614 65055 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:28.901555 65055 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:35287" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:28.903275 65511 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:38307
[08:42:54][Step 2/2] I181018 08:38:28.922316 65055 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 2bbd698e at applied index 15
[08:42:54][Step 2/2] I181018 08:38:28.924048 65055 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 48, log entries: 5, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:28.925466 65514 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 15 (id=2bbd698e, encoded size=8165, 1 rocksdb batches, 5 log entries)
[08:42:54][Step 2/2] I181018 08:38:28.928317 65514 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:28.931917 65055 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:28.940025 65055 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=eb7927f0] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:29.229850 65055 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "m" [r2]
[08:42:54][Step 2/2] I181018 08:38:29.268267 65055 storage/store.go:2580 removing replica r2/1
[08:42:54][Step 2/2] I181018 08:38:29.269432 65055 storage/replica.go:863 removed 42 (37+5) keys in 1ms [clear=0ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:29.275138 65390 storage/store.go:3659 [s2,r2/2:{m-/Max}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:29.490478 65390 storage/store.go:3659 [s2,r2/2:{m-/Max}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:29.791295 65390 storage/store.go:3659 [s2,r2/2:{m-/Max}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:30.090092 65390 storage/store.go:3659 [s2,r2/2:{m-/Max}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:30.390350 65055 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41013" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:30.443667 65055 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:30.444542 65055 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45617" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:30.446982 65804 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:41013
[08:42:54][Step 2/2] I181018 08:38:30.482669 65055 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 3f707753 at applied index 15
[08:42:54][Step 2/2] I181018 08:38:30.489196 65055 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 48, log entries: 5, rate-limit: 2.0 MiB/sec, 21ms
[08:42:54][Step 2/2] I181018 08:38:30.493252 65907 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 15 (id=3f707753, encoded size=8165, 1 rocksdb batches, 5 log entries)
[08:42:54][Step 2/2] I181018 08:38:30.495658 65907 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:30.499225 65055 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:30.508183 65055 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=0f68f2db] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:30.662526 65055 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "m" [r2]
[08:42:54][Step 2/2] --- PASS: TestReplicaRemovalCampaign (1.99s)
[08:42:54][Step 2/2] === RUN TestRaftAfterRemoveRange
[08:42:54][Step 2/2] I181018 08:38:30.856207 65955 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41907" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:30.912799 65955 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:30.913894 65955 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45475" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:30.915825 65533 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:41907
[08:42:54][Step 2/2] W181018 08:38:30.982051 65955 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:30.982777 65955 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:36029" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:30.985683 65896 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:41907
[08:42:54][Step 2/2] I181018 08:38:31.012358 65955 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:54][Step 2/2] I181018 08:38:31.047259 65955 storage/store_snapshot.go:621 [s1,r2/1:{b-/Max}] sending preemptive snapshot ea363d24 at applied index 11
[08:42:54][Step 2/2] I181018 08:38:31.048715 65955 storage/store_snapshot.go:664 [s1,r2/1:{b-/Max}] streamed snapshot to (n2,s2):?: kv pairs: 42, log entries: 1, rate-limit: 2.0 MiB/sec, 8ms
[08:42:54][Step 2/2] I181018 08:38:31.050159 66107 storage/replica_raftstorage.go:804 [s2,r2/?:{-}] applying preemptive snapshot at index 11 (id=ea363d24, encoded size=7463, 1 rocksdb batches, 1 log entries)
[08:42:54][Step 2/2] I181018 08:38:31.051929 66107 storage/replica_raftstorage.go:810 [s2,r2/?:{b-/Max}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:31.058028 65955 storage/replica_command.go:816 [s1,r2/1:{b-/Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:{b-/Max} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:31.075098 65955 storage/replica.go:3884 [s1,r2/1:{b-/Max},txn=b0ca84ee] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:31.087291 65955 storage/store_snapshot.go:621 [s1,r2/1:{b-/Max}] sending preemptive snapshot c41357db at applied index 14
[08:42:54][Step 2/2] I181018 08:38:31.088940 65955 storage/store_snapshot.go:664 [s1,r2/1:{b-/Max}] streamed snapshot to (n3,s3):?: kv pairs: 44, log entries: 4, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:31.092018 66108 storage/replica_raftstorage.go:804 [s3,r2/?:{-}] applying preemptive snapshot at index 14 (id=c41357db, encoded size=8356, 1 rocksdb batches, 4 log entries)
[08:42:54][Step 2/2] I181018 08:38:31.095551 66108 storage/replica_raftstorage.go:810 [s3,r2/?:{b-/Max}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:31.101397 65955 storage/replica_command.go:816 [s1,r2/1:{b-/Max}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r2:{b-/Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:31.124848 65955 storage/replica.go:3884 [s1,r2/1:{b-/Max},txn=fc5daaea] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:31.181498 65955 storage/replica_command.go:816 [s1,r2/1:{b-/Max}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r2:{b-/Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:31.196497 65955 storage/replica.go:3884 [s1,r2/1:{b-/Max},txn=9d7a2bd4] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2] next=4
[08:42:54][Step 2/2] I181018 08:38:31.209353 65955 storage/replica_command.go:816 [s1,r2/1:{b-/Max}] change replicas (REMOVE_REPLICA (n2,s2):2): read existing descriptor r2:{b-/Max} [(n1,s1):1, (n2,s2):2, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:31.211677 66263 storage/store.go:3640 [s3,r2/3:{b-/Max}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:31.214192 66173 storage/store.go:2580 [replicaGC,s3,r2/3:{b-/Max}] removing replica r2/3
[08:42:54][Step 2/2] I181018 08:38:31.215721 66173 storage/replica.go:863 [replicaGC,s3,r2/3:{b-/Max}] removed 42 (36+6) keys in 1ms [clear=0ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:31.234569 65955 storage/replica.go:3884 [s1,r2/1:{b-/Max},txn=07f08368] proposing REMOVE_REPLICA((n2,s2):2): updated=[(n1,s1):1] next=4
[08:42:54][Step 2/2] I181018 08:38:31.243482 66263 storage/store.go:3640 [s2,r2/2:{b-/Max}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:31.248447 66176 storage/store.go:2580 [replicaGC,s2,r2/2:{b-/Max}] removing replica r2/2
[08:42:54][Step 2/2] I181018 08:38:31.249878 66176 storage/replica.go:863 [replicaGC,s2,r2/2:{b-/Max}] removed 42 (36+6) keys in 1ms [clear=0ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:31.250688 65955 storage/store_snapshot.go:621 [s1,r1/1:{/Min-b}] sending preemptive snapshot 40bd54e9 at applied index 27
[08:42:54][Step 2/2] I181018 08:38:31.252180 65955 storage/store_snapshot.go:664 [s1,r1/1:{/Min-b}] streamed snapshot to (n2,s2):?: kv pairs: 22, log entries: 17, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:31.253816 66162 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 27 (id=40bd54e9, encoded size=4252, 1 rocksdb batches, 17 log entries)
[08:42:54][Step 2/2] I181018 08:38:31.258637 66162 storage/replica_raftstorage.go:810 [s2,r1/?:{/Min-b}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=4ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:31.261724 65955 storage/replica_command.go:816 [s1,r1/1:{/Min-b}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:{/Min-b} [(n1,s1):1, next=2, gen=1]
[08:42:54][Step 2/2] I181018 08:38:31.270485 65955 storage/replica.go:3884 [s1,r1/1:{/Min-b},txn=22ac0c37] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:31.344940 65955 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:54][Step 2/2] W181018 08:38:31.353224 66142 storage/store.go:1490 [s2,r1/2:{/Min-b}] could not gossip first range descriptor: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:31.381461 66261 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: EOF:
[08:42:54][Step 2/2] --- PASS: TestRaftAfterRemoveRange (0.61s)
[08:42:54][Step 2/2] === RUN TestRaftRemoveRace
[08:42:54][Step 2/2] I181018 08:38:31.448937 66277 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:38421" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:31.505642 66277 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:31.506872 66277 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45329" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:31.511567 66415 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:38421
[08:42:54][Step 2/2] W181018 08:38:31.622995 66277 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:31.624251 66277 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:40823" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:31.626231 66631 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:38421
[08:42:54][Step 2/2] I181018 08:38:31.662530 66277 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 816cef26 at applied index 16
[08:42:54][Step 2/2] I181018 08:38:31.664279 66277 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:31.666744 66677 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=816cef26, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:31.670262 66677 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:31.673696 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:31.683376 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=fe6afa96] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:31.696494 66277 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 67118f87 at applied index 18
[08:42:54][Step 2/2] I181018 08:38:31.697956 66277 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:31.699559 66633 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=67118f87, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:31.703268 66633 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:31.724765 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:31.747333 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=59ad4f7f] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:31.896327 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:31.912607 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=bb98aa1c] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2] next=4
[08:42:54][Step 2/2] W181018 08:38:31.923695 66347 storage/replica.go:5048 [s1,r1/1:/M{in-ax}] failed to look up recipient replica 3 in r1 while sending MsgApp: replica 3 not present in (n2,s2):2, [(n1,s1):1 (n2,s2):2]
[08:42:54][Step 2/2] I181018 08:38:31.924656 66695 storage/store.go:2580 [replicaGC,s3,r1/3:/M{in-ax}] removing replica r1/3
[08:42:54][Step 2/2] I181018 08:38:31.926353 66536 storage/store.go:3640 [s3,r1/3:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:31.926798 66536 storage/store.go:3640 [s3,r1/3:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:31.927005 66695 storage/replica.go:863 [replicaGC,s3,r1/3:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:31.930042 66277 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 95088258 at applied index 22
[08:42:54][Step 2/2] I181018 08:38:31.933927 66277 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 58, log entries: 12, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:31.937165 66636 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 22 (id=95088258, encoded size=11231, 1 rocksdb batches, 12 log entries)
[08:42:54][Step 2/2] I181018 08:38:31.941976 66636 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:31.945603 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:31.957878 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=c6c34c5f] proposing ADD_REPLICA((n3,s3):4): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):4] next=5
[08:42:54][Step 2/2] I181018 08:38:31.990032 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):4, next=5, gen=0]
[08:42:54][Step 2/2] I181018 08:38:32.002606 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=d8a7f9ee] proposing REMOVE_REPLICA((n3,s3):4): updated=[(n1,s1):1 (n2,s2):2] next=5
[08:42:54][Step 2/2] I181018 08:38:32.013297 66536 storage/store.go:3640 [s3,r1/4:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:32.015939 66699 storage/store.go:2580 [replicaGC,s3,r1/4:/M{in-ax}] removing replica r1/4
[08:42:54][Step 2/2] I181018 08:38:32.018302 66699 storage/replica.go:863 [replicaGC,s3,r1/4:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:32.020014 66277 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot dafa1884 at applied index 28
[08:42:54][Step 2/2] I181018 08:38:32.021589 66277 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 66, log entries: 18, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:32.023511 66506 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 28 (id=dafa1884, encoded size=13526, 1 rocksdb batches, 18 log entries)
[08:42:54][Step 2/2] I181018 08:38:32.030799 66506 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=6ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:32.034618 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):5): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=5, gen=0]
[08:42:54][Step 2/2] I181018 08:38:32.049346 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=ec98d313] proposing ADD_REPLICA((n3,s3):5): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):5] next=6
[08:42:54][Step 2/2] I181018 08:38:32.108292 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):5): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):5, next=6, gen=0]
[08:42:54][Step 2/2] I181018 08:38:32.132957 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=be1fc349] proposing REMOVE_REPLICA((n3,s3):5): updated=[(n1,s1):1 (n2,s2):2] next=6
[08:42:54][Step 2/2] I181018 08:38:32.147907 66542 storage/store.go:2580 [replicaGC,s3,r1/5:/M{in-ax}] removing replica r1/5
[08:42:54][Step 2/2] I181018 08:38:32.148261 66536 storage/store.go:3640 [s3,r1/5:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:32.149192 66536 storage/store.go:3640 [s3,r1/5:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:32.150469 66542 storage/replica.go:863 [replicaGC,s3,r1/5:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:32.157380 66277 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 4a600686 at applied index 33
[08:42:54][Step 2/2] I181018 08:38:32.159487 66277 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 73, log entries: 23, rate-limit: 2.0 MiB/sec, 7ms
[08:42:54][Step 2/2] I181018 08:38:32.161685 66545 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 33 (id=4a600686, encoded size=15649, 1 rocksdb batches, 23 log entries)
[08:42:54][Step 2/2] I181018 08:38:32.170640 66545 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 9ms [clear=0ms batch=0ms entries=7ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:32.174501 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):6): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=6, gen=0]
[08:42:54][Step 2/2] I181018 08:38:32.188115 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=0d8c7837] proposing ADD_REPLICA((n3,s3):6): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):6] next=7
[08:42:54][Step 2/2] I181018 08:38:32.191159 66539 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:32.342387 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):6): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):6, next=7, gen=0]
[08:42:54][Step 2/2] I181018 08:38:32.357925 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=0c367cd3] proposing REMOVE_REPLICA((n3,s3):6): updated=[(n1,s1):1 (n2,s2):2] next=7
[08:42:54][Step 2/2] I181018 08:38:32.392447 66536 storage/store.go:3640 [s3,r1/6:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:32.394963 66704 storage/store.go:2580 [replicaGC,s3,r1/6:/M{in-ax}] removing replica r1/6
[08:42:54][Step 2/2] I181018 08:38:32.396239 66277 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot de52d35b at applied index 37
[08:42:54][Step 2/2] I181018 08:38:32.397868 66704 storage/replica.go:863 [replicaGC,s3,r1/6:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=1ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:32.400231 66539 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:32.400519 66277 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 79, log entries: 27, rate-limit: 2.0 MiB/sec, 7ms
[08:42:54][Step 2/2] I181018 08:38:32.403566 66718 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 37 (id=de52d35b, encoded size=17600, 1 rocksdb batches, 27 log entries)
[08:42:54][Step 2/2] I181018 08:38:32.429865 66718 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 26ms [clear=0ms batch=0ms entries=24ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:32.433650 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):7): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=7, gen=0]
[08:42:54][Step 2/2] I181018 08:38:32.447563 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=bb8cf224] proposing ADD_REPLICA((n3,s3):7): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):7] next=8
[08:42:54][Step 2/2] I181018 08:38:32.598168 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):7): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):7, next=8, gen=0]
[08:42:54][Step 2/2] I181018 08:38:32.619803 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=1d5fa6bc] proposing REMOVE_REPLICA((n3,s3):7): updated=[(n1,s1):1 (n2,s2):2] next=8
[08:42:54][Step 2/2] I181018 08:38:32.629554 66536 storage/store.go:3640 [s3,r1/7:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:32.631756 66536 storage/store.go:3640 [s3,r1/7:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:32.632214 66530 storage/store.go:2580 [replicaGC,s3,r1/7:/M{in-ax}] removing replica r1/7
[08:42:54][Step 2/2] I181018 08:38:32.634708 66530 storage/replica.go:863 [replicaGC,s3,r1/7:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:32.637418 66277 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot caf89fe7 at applied index 44
[08:42:54][Step 2/2] I181018 08:38:32.639723 66277 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 88, log entries: 34, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:32.642261 66722 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 44 (id=caf89fe7, encoded size=20061, 1 rocksdb batches, 34 log entries)
[08:42:54][Step 2/2] I181018 08:38:32.655270 66722 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 13ms [clear=0ms batch=0ms entries=11ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:32.658553 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):8): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=8, gen=0]
[08:42:54][Step 2/2] I181018 08:38:32.674572 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=5c974db2] proposing ADD_REPLICA((n3,s3):8): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):8] next=9
[08:42:54][Step 2/2] I181018 08:38:32.830591 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):8): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):8, next=9, gen=0]
[08:42:54][Step 2/2] I181018 08:38:32.847444 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=24ddf99f] proposing REMOVE_REPLICA((n3,s3):8): updated=[(n1,s1):1 (n2,s2):2] next=9
[08:42:54][Step 2/2] I181018 08:38:32.861805 66536 storage/store.go:3640 [s3,r1/8:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:32.862357 66536 storage/store.go:3640 [s3,r1/8:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:32.862551 66771 storage/store.go:2580 [replicaGC,s3,r1/8:/M{in-ax}] removing replica r1/8
[08:42:54][Step 2/2] I181018 08:38:32.864532 66771 storage/replica.go:863 [replicaGC,s3,r1/8:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:32.869742 66277 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot ff77012c at applied index 49
[08:42:54][Step 2/2] I181018 08:38:32.872523 66277 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 95, log entries: 39, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:32.875589 66686 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 49 (id=ff77012c, encoded size=22182, 1 rocksdb batches, 39 log entries)
[08:42:54][Step 2/2] I181018 08:38:32.891334 66686 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 15ms [clear=1ms batch=0ms entries=13ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:32.893311 66539 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:32.896257 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):9): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=9, gen=0]
[08:42:54][Step 2/2] I181018 08:38:32.914517 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=16ebf52b] proposing ADD_REPLICA((n3,s3):9): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):9] next=10
[08:42:54][Step 2/2] I181018 08:38:33.006328 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):9): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):9, next=10, gen=0]
[08:42:54][Step 2/2] I181018 08:38:33.020398 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=fc5e8ae5] proposing REMOVE_REPLICA((n3,s3):9): updated=[(n1,s1):1 (n2,s2):2] next=10
[08:42:54][Step 2/2] I181018 08:38:33.032849 66536 storage/store.go:3640 [s3,r1/9:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:33.042055 66687 storage/store.go:2580 [replicaGC,s3,r1/9:/M{in-ax}] removing replica r1/9
[08:42:54][Step 2/2] I181018 08:38:33.043963 66277 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 58522ce8 at applied index 55
[08:42:54][Step 2/2] I181018 08:38:33.043989 66687 storage/replica.go:863 [replicaGC,s3,r1/9:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:33.046221 66277 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 103, log entries: 45, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:33.048013 66728 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 55 (id=58522ce8, encoded size=24563, 1 rocksdb batches, 45 log entries)
[08:42:54][Step 2/2] I181018 08:38:33.062463 66728 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 14ms [clear=0ms batch=0ms entries=12ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:33.066284 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):10): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=10, gen=0]
[08:42:54][Step 2/2] I181018 08:38:33.086453 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=28322b08] proposing ADD_REPLICA((n3,s3):10): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):10] next=11
[08:42:54][Step 2/2] I181018 08:38:33.096865 66521 gossip/gossip.go:1510 [n2] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:38:33.238554 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):10): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):10, next=11, gen=0]
[08:42:54][Step 2/2] I181018 08:38:33.253392 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=29cbd794] proposing REMOVE_REPLICA((n3,s3):10): updated=[(n1,s1):1 (n2,s2):2] next=11
[08:42:54][Step 2/2] I181018 08:38:33.272003 66536 storage/store.go:3640 [s3,r1/10:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:33.272451 66536 storage/store.go:3640 [s3,r1/10:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:33.274713 66277 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 40b695ef at applied index 59
[08:42:54][Step 2/2] I181018 08:38:33.277100 66277 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 109, log entries: 49, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:33.280953 66730 storage/store.go:2580 [replicaGC,s3,r1/10:/M{in-ax}] removing replica r1/10
[08:42:54][Step 2/2] I181018 08:38:33.282691 66730 storage/replica.go:863 [replicaGC,s3,r1/10:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:33.288137 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):11): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=11, gen=0]
[08:42:54][Step 2/2] I181018 08:38:33.318736 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=7643c8ff] proposing ADD_REPLICA((n3,s3):11): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):11] next=12
[08:42:54][Step 2/2] I181018 08:38:33.345784 66689 storage/store_snapshot.go:621 [raftsnapshot,s1,r1/1:/M{in-ax}] sending Raft snapshot 8cc31d19 at applied index 63
[08:42:54][Step 2/2] I181018 08:38:33.347825 66689 storage/store_snapshot.go:664 [raftsnapshot,s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):11: kv pairs: 114, log entries: 53, rate-limit: 8.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:33.349319 66805 storage/replica_raftstorage.go:804 [s3,r1/11:{-}] applying Raft snapshot at index 63 (id=8cc31d19, encoded size=27861, 1 rocksdb batches, 53 log entries)
[08:42:54][Step 2/2] I181018 08:38:33.368037 66805 storage/replica_raftstorage.go:810 [s3,r1/11:/M{in-ax}] applied Raft snapshot in 18ms [clear=1ms batch=0ms entries=16ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:33.377732 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):11): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):11, next=12, gen=0]
[08:42:54][Step 2/2] I181018 08:38:33.390174 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=5a1d2348] proposing REMOVE_REPLICA((n3,s3):11): updated=[(n1,s1):1 (n2,s2):2] next=12
[08:42:54][Step 2/2] I181018 08:38:33.400781 66536 storage/store.go:3640 [s3,r1/11:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:33.403392 66737 storage/store.go:2580 [replicaGC,s3,r1/11:/M{in-ax}] removing replica r1/11
[08:42:54][Step 2/2] I181018 08:38:33.403608 66536 storage/store.go:3640 [s3,r1/11:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:33.406238 66737 storage/replica.go:863 [replicaGC,s3,r1/11:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:33.409254 66277 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot a8efa1cb at applied index 65
[08:42:54][Step 2/2] I181018 08:38:33.416895 66277 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 117, log entries: 55, rate-limit: 2.0 MiB/sec, 10ms
[08:42:54][Step 2/2] I181018 08:38:33.419346 66851 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 65 (id=a8efa1cb, encoded size=28805, 1 rocksdb batches, 55 log entries)
[08:42:54][Step 2/2] I181018 08:38:33.443030 66851 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 23ms [clear=0ms batch=0ms entries=21ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:33.446614 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):12): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=12, gen=0]
[08:42:54][Step 2/2] I181018 08:38:33.458539 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=f0a07e8d] proposing ADD_REPLICA((n3,s3):12): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):12] next=13
[08:42:54][Step 2/2] I181018 08:38:33.617217 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):12): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):12, next=13, gen=0]
[08:42:54][Step 2/2] I181018 08:38:33.630335 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=afec0e35] proposing REMOVE_REPLICA((n3,s3):12): updated=[(n1,s1):1 (n2,s2):2] next=13
[08:42:54][Step 2/2] I181018 08:38:33.638503 66536 storage/store.go:3640 [s3,r1/12:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:33.641652 66854 storage/store.go:2580 [replicaGC,s3,r1/12:/M{in-ax}] removing replica r1/12
[08:42:54][Step 2/2] I181018 08:38:33.642674 66536 storage/store.go:3640 [s3,r1/12:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:33.645625 66854 storage/replica.go:863 [replicaGC,s3,r1/12:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:33.646372 66277 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot c92bfdd5 at applied index 70
[08:42:54][Step 2/2] I181018 08:38:33.649136 66277 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 124, log entries: 60, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:33.651430 66745 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 70 (id=c92bfdd5, encoded size=30926, 1 rocksdb batches, 60 log entries)
[08:42:54][Step 2/2] I181018 08:38:33.669354 66745 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 18ms [clear=1ms batch=0ms entries=15ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:33.672342 66277 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):13): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=13, gen=0]
[08:42:54][Step 2/2] I181018 08:38:33.683733 66277 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b3eb4622] proposing ADD_REPLICA((n3,s3):13): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):13] next=14
[08:42:54][Step 2/2] I181018 08:38:33.688157 66664 gossip/gossip.go:1510 [n3] node has connected to cluster via gossip
[08:42:54][Step 2/2] W181018 08:38:33.742961 66484 storage/store.go:1490 [s2,r1/2:/M{in-ax}] could not gossip first range descriptor: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:33.781516 66534 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: rpc error: code = Unavailable desc = transport is closing:
[08:42:54][Step 2/2] --- PASS: TestRaftRemoveRace (2.40s)
[08:42:54][Step 2/2] === RUN TestRemovePlaceholderRace
[08:42:54][Step 2/2] I181018 08:38:33.839471 66799 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:32829" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:33.888228 66799 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:33.889233 66799 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:35955" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:33.893379 66780 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:32829
[08:42:54][Step 2/2] W181018 08:38:34.006268 66799 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:34.007011 66799 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:38889" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:34.011042 67153 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:32829
[08:42:54][Step 2/2] I181018 08:38:34.070040 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot ba9adb66 at applied index 16
[08:42:54][Step 2/2] I181018 08:38:34.071772 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 7ms
[08:42:54][Step 2/2] I181018 08:38:34.077303 66754 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=ba9adb66, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:34.081155 66754 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:34.085053 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:34.095053 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=d85bc486] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:34.117141 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 357df548 at applied index 18
[08:42:54][Step 2/2] I181018 08:38:34.118952 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 10ms
[08:42:54][Step 2/2] I181018 08:38:34.121846 67205 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=357df548, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:34.125986 67205 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:34.131100 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:34.154511 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=5b4ecde6] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:34.311179 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:34.345645 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=2b9f58d7] proposing REMOVE_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:34.358492 66972 storage/store.go:2580 [replicaGC,s2,r1/2:/M{in-ax}] removing replica r1/2
[08:42:54][Step 2/2] I181018 08:38:34.360527 66972 storage/replica.go:863 [replicaGC,s2,r1/2:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:34.365281 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot b9bc6cea at applied index 23
[08:42:54][Step 2/2] I181018 08:38:34.366893 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 59, log entries: 13, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:34.369725 66974 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 23 (id=b9bc6cea, encoded size=11399, 1 rocksdb batches, 13 log entries)
[08:42:54][Step 2/2] I181018 08:38:34.375988 66974 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=5ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:34.381982 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:34.383496 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:34.403855 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=7ff733b5] proposing ADD_REPLICA((n2,s2):4): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):4] next=5
[08:42:54][Step 2/2] I181018 08:38:34.422304 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):4, next=5, gen=0]
[08:42:54][Step 2/2] I181018 08:38:34.442768 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=5caa0cda] proposing REMOVE_REPLICA((n2,s2):4): updated=[(n1,s1):1 (n3,s3):3] next=5
[08:42:54][Step 2/2] I181018 08:38:34.455114 67209 storage/store.go:3640 [s2,r1/4:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:34.456267 66832 storage/store.go:2580 [replicaGC,s2,r1/4:/M{in-ax}] removing replica r1/4
[08:42:54][Step 2/2] I181018 08:38:34.456620 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 0f4891c7 at applied index 28
[08:42:54][Step 2/2] I181018 08:38:34.459251 66832 storage/replica.go:863 [replicaGC,s2,r1/4:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:34.459438 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 66, log entries: 18, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:34.462019 66976 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 28 (id=0f4891c7, encoded size=13612, 1 rocksdb batches, 18 log entries)
[08:42:54][Step 2/2] I181018 08:38:34.469478 66976 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=6ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:34.491463 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):5): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=5, gen=0]
[08:42:54][Step 2/2] I181018 08:38:34.493572 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:34.509199 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=30788ff2] proposing ADD_REPLICA((n2,s2):5): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):5] next=6
[08:42:54][Step 2/2] I181018 08:38:34.531176 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):5): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):5, next=6, gen=0]
[08:42:54][Step 2/2] I181018 08:38:34.548352 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=fcd9df2a] proposing REMOVE_REPLICA((n2,s2):5): updated=[(n1,s1):1 (n3,s3):3] next=6
[08:42:54][Step 2/2] I181018 08:38:34.561155 67209 storage/store.go:3640 [s2,r1/5:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:34.562655 66849 storage/store.go:2580 [replicaGC,s2,r1/5:/M{in-ax}] removing replica r1/5
[08:42:54][Step 2/2] I181018 08:38:34.564306 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 3731fca4 at applied index 33
[08:42:54][Step 2/2] I181018 08:38:34.564648 66849 storage/replica.go:863 [replicaGC,s2,r1/5:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:34.566401 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 73, log entries: 23, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:34.570715 67251 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 33 (id=3731fca4, encoded size=15735, 1 rocksdb batches, 23 log entries)
[08:42:54][Step 2/2] I181018 08:38:34.579732 67251 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 9ms [clear=0ms batch=0ms entries=7ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:34.582094 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:34.585441 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):6): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=6, gen=0]
[08:42:54][Step 2/2] I181018 08:38:34.607477 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=85bdcbb0] proposing ADD_REPLICA((n2,s2):6): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):6] next=7
[08:42:54][Step 2/2] I181018 08:38:34.623460 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):6): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):6, next=7, gen=0]
[08:42:54][Step 2/2] I181018 08:38:34.643985 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=0bfe1a69] proposing REMOVE_REPLICA((n2,s2):6): updated=[(n1,s1):1 (n3,s3):3] next=7
[08:42:54][Step 2/2] I181018 08:38:34.654526 67209 storage/store.go:3640 [s2,r1/6:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:34.655653 67209 storage/store.go:3640 [s2,r1/6:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:34.657316 67067 storage/store.go:2580 [replicaGC,s2,r1/6:/M{in-ax}] removing replica r1/6
[08:42:54][Step 2/2] I181018 08:38:34.658925 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 008b4314 at applied index 37
[08:42:54][Step 2/2] I181018 08:38:34.659561 67067 storage/replica.go:863 [replicaGC,s2,r1/6:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:34.661453 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 79, log entries: 27, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:34.662764 67269 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 37 (id=008b4314, encoded size=17776, 1 rocksdb batches, 27 log entries)
[08:42:54][Step 2/2] I181018 08:38:34.671655 67269 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 9ms [clear=0ms batch=0ms entries=7ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:34.677459 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):7): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=7, gen=0]
[08:42:54][Step 2/2] I181018 08:38:34.679672 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:34.694708 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=7d510691] proposing ADD_REPLICA((n2,s2):7): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):7] next=8
[08:42:54][Step 2/2] I181018 08:38:34.706941 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):7): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):7, next=8, gen=0]
[08:42:54][Step 2/2] I181018 08:38:34.719743 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=20e5dbad] proposing REMOVE_REPLICA((n2,s2):7): updated=[(n1,s1):1 (n3,s3):3] next=8
[08:42:54][Step 2/2] I181018 08:38:34.734004 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 3f088b98 at applied index 41
[08:42:54][Step 2/2] I181018 08:38:34.735356 67209 storage/store.go:3640 [s2,r1/7:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:34.736379 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 85, log entries: 31, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:34.739530 67213 storage/store.go:2580 [replicaGC,s2,r1/7:/M{in-ax}] removing replica r1/7
[08:42:54][Step 2/2] I181018 08:38:34.741597 67213 storage/replica.go:863 [replicaGC,s2,r1/7:/M{in-ax}] removed 49 (43+6) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:34.742807 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):8): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=8, gen=0]
[08:42:54][Step 2/2] I181018 08:38:34.762154 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=74e4988f] proposing ADD_REPLICA((n2,s2):8): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):8] next=9
[08:42:54][Step 2/2] I181018 08:38:34.777812 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):8): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):8, next=9, gen=0]
[08:42:54][Step 2/2] I181018 08:38:34.796425 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=e8826e4a] proposing REMOVE_REPLICA((n2,s2):8): updated=[(n1,s1):1 (n3,s3):3] next=9
[08:42:54][Step 2/2] I181018 08:38:34.806913 67202 storage/store_snapshot.go:621 [raftsnapshot,s1,r1/1:/M{in-ax}] sending Raft snapshot 8ca58bff at applied index 45
[08:42:54][Step 2/2] I181018 08:38:34.809797 67202 storage/store_snapshot.go:664 [raftsnapshot,s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):8: kv pairs: 92, log entries: 35, rate-limit: 8.0 MiB/sec, 12ms
[08:42:54][Step 2/2] I181018 08:38:34.812557 67217 storage/replica_raftstorage.go:804 [s2,r1/8:{-}] applying Raft snapshot at index 45 (id=8ca58bff, encoded size=21605, 1 rocksdb batches, 35 log entries)
[08:42:54][Step 2/2] I181018 08:38:34.826493 67217 storage/replica_raftstorage.go:810 [s2,r1/8:/M{in-ax}] applied Raft snapshot in 14ms [clear=1ms batch=0ms entries=12ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:34.831868 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 7ee6b9e6 at applied index 46
[08:42:54][Step 2/2] I181018 08:38:34.834137 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 92, log entries: 36, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:34.837295 67285 storage/replica_raftstorage.go:804 [s2,r1/8:/M{in-ax}] applying preemptive snapshot at index 46 (id=7ee6b9e6, encoded size=21938, 1 rocksdb batches, 36 log entries)
[08:42:54][Step 2/2] I181018 08:38:34.841743 67209 storage/store.go:3640 [s2,r1/8:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:34.851857 67285 storage/replica_raftstorage.go:810 [s2,r1/8:/M{in-ax}] applied preemptive snapshot in 14ms [clear=1ms batch=0ms entries=12ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:34.852372 67242 storage/store.go:2580 [replicaGC,s2,r1/8:/M{in-ax}] removing replica r1/8
[08:42:54][Step 2/2] I181018 08:38:34.854708 67242 storage/replica.go:863 [replicaGC,s2,r1/8:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:34.857023 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):9): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=9, gen=0]
[08:42:54][Step 2/2] I181018 08:38:34.880463 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=333ae2d8] proposing ADD_REPLICA((n2,s2):9): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):9] next=10
[08:42:54][Step 2/2] I181018 08:38:34.893368 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):9): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):9, next=10, gen=0]
[08:42:54][Step 2/2] I181018 08:38:34.909059 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=c9b4ddd1] proposing REMOVE_REPLICA((n2,s2):9): updated=[(n1,s1):1 (n3,s3):3] next=10
[08:42:54][Step 2/2] I181018 08:38:34.913469 67070 storage/store_snapshot.go:621 [raftsnapshot,s1,r1/1:/M{in-ax}] sending Raft snapshot 31c5c90b at applied index 50
[08:42:54][Step 2/2] I181018 08:38:34.916003 67070 storage/store_snapshot.go:664 [raftsnapshot,s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):9: kv pairs: 99, log entries: 40, rate-limit: 8.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:34.918053 67315 storage/replica_raftstorage.go:804 [s2,r1/9:{-}] applying Raft snapshot at index 50 (id=31c5c90b, encoded size=23726, 1 rocksdb batches, 40 log entries)
[08:42:54][Step 2/2] I181018 08:38:34.940664 67315 storage/replica_raftstorage.go:810 [s2,r1/9:/M{in-ax}] applied Raft snapshot in 22ms [clear=0ms batch=0ms entries=20ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:34.947665 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 4558b433 at applied index 52
[08:42:54][Step 2/2] I181018 08:38:34.947812 67209 storage/store.go:3640 [s2,r1/9:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:34.950092 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 100, log entries: 42, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:34.952167 67263 storage/replica_raftstorage.go:804 [s2,r1/9:/M{in-ax}] applying preemptive snapshot at index 52 (id=4558b433, encoded size=24229, 1 rocksdb batches, 42 log entries)
[08:42:54][Step 2/2] I181018 08:38:34.974550 67263 storage/replica_raftstorage.go:810 [s2,r1/9:/M{in-ax}] applied preemptive snapshot in 22ms [clear=1ms batch=0ms entries=20ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:34.987211 67074 storage/store.go:2580 [replicaGC,s2,r1/9:/M{in-ax}] removing replica r1/9
[08:42:54][Step 2/2] I181018 08:38:34.989998 67074 storage/replica.go:863 [replicaGC,s2,r1/9:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:34.992822 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):10): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=10, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.005406 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=775c735f] proposing ADD_REPLICA((n2,s2):10): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):10] next=11
[08:42:54][Step 2/2] I181018 08:38:35.026565 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):10): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):10, next=11, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.050653 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=ea499994] proposing REMOVE_REPLICA((n2,s2):10): updated=[(n1,s1):1 (n3,s3):3] next=11
[08:42:54][Step 2/2] I181018 08:38:35.051908 67288 storage/store_snapshot.go:621 [raftsnapshot,s1,r1/1:/M{in-ax}] sending Raft snapshot ed2c17eb at applied index 55
[08:42:54][Step 2/2] I181018 08:38:35.054303 67288 storage/store_snapshot.go:664 [raftsnapshot,s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):10: kv pairs: 106, log entries: 45, rate-limit: 8.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:35.055668 67348 storage/replica_raftstorage.go:804 [s2,r1/10:{-}] applying Raft snapshot at index 55 (id=ed2c17eb, encoded size=25847, 1 rocksdb batches, 45 log entries)
[08:42:54][Step 2/2] I181018 08:38:35.073967 67348 storage/replica_raftstorage.go:810 [s2,r1/10:/M{in-ax}] applied Raft snapshot in 18ms [clear=1ms batch=0ms entries=16ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:35.078181 67209 storage/store.go:3640 [s2,r1/10:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:35.079183 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot ddca54c9 at applied index 56
[08:42:54][Step 2/2] I181018 08:38:35.081638 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 106, log entries: 46, rate-limit: 2.0 MiB/sec, 7ms
[08:42:54][Step 2/2] I181018 08:38:35.082556 67279 storage/store.go:2580 [replicaGC,s2,r1/10:/M{in-ax}] removing replica r1/10
[08:42:54][Step 2/2] I181018 08:38:35.085157 67279 storage/replica.go:863 [replicaGC,s2,r1/10:/M{in-ax}] removed 49 (43+6) keys in 2ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:35.092098 67350 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 56 (id=ddca54c9, encoded size=26180, 1 rocksdb batches, 46 log entries)
[08:42:54][Step 2/2] I181018 08:38:35.109594 67350 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 17ms [clear=1ms batch=0ms entries=15ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:35.114641 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):11): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=11, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.127711 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=d1e00318] proposing ADD_REPLICA((n2,s2):11): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):11] next=12
[08:42:54][Step 2/2] I181018 08:38:35.131256 67209 storage/store.go:3618 [s2,r1/?:/M{in-ax}] replica too old response with old replica ID: 10
[08:42:54][Step 2/2] I181018 08:38:35.139694 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):11): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):11, next=12, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.151475 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=5287e55f] proposing REMOVE_REPLICA((n2,s2):11): updated=[(n1,s1):1 (n3,s3):3] next=12
[08:42:54][Step 2/2] I181018 08:38:35.168945 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot cf9f8a5a at applied index 60
[08:42:54][Step 2/2] I181018 08:38:35.175602 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 112, log entries: 50, rate-limit: 2.0 MiB/sec, 10ms
[08:42:54][Step 2/2] I181018 08:38:35.176113 67209 storage/store.go:3640 [s2,r1/11:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:35.179151 67371 storage/store.go:2580 [replicaGC,s2,r1/11:/M{in-ax}] removing replica r1/11
[08:42:54][Step 2/2] I181018 08:38:35.181688 67371 storage/replica.go:863 [replicaGC,s2,r1/11:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:35.181741 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):12): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=12, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.192987 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=46c901c9] proposing ADD_REPLICA((n2,s2):12): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):12] next=13
[08:42:54][Step 2/2] I181018 08:38:35.216271 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):12): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):12, next=13, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.230301 67381 storage/store_snapshot.go:621 [raftsnapshot,s1,r1/1:/M{in-ax}] sending Raft snapshot 0cc50a25 at applied index 63
[08:42:54][Step 2/2] I181018 08:38:35.233062 67381 storage/store_snapshot.go:664 [raftsnapshot,s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):12: kv pairs: 116, log entries: 53, rate-limit: 8.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:35.234946 67291 storage/replica_raftstorage.go:804 [s2,r1/12:{-}] applying Raft snapshot at index 63 (id=0cc50a25, encoded size=29304, 1 rocksdb batches, 53 log entries)
[08:42:54][Step 2/2] I181018 08:38:35.235021 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=5b82a074] proposing REMOVE_REPLICA((n2,s2):12): updated=[(n1,s1):1 (n3,s3):3] next=13
[08:42:54][Step 2/2] I181018 08:38:35.259569 67291 storage/replica_raftstorage.go:810 [s2,r1/12:/M{in-ax}] applied Raft snapshot in 24ms [clear=1ms batch=0ms entries=22ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:35.264167 67209 storage/store.go:3640 [s2,r1/12:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:35.267149 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 6457affa at applied index 65
[08:42:54][Step 2/2] I181018 08:38:35.270130 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 119, log entries: 55, rate-limit: 2.0 MiB/sec, 12ms
[08:42:54][Step 2/2] I181018 08:38:35.270305 67333 storage/store.go:2580 [replicaGC,s2,r1/12:/M{in-ax}] removing replica r1/12
[08:42:54][Step 2/2] I181018 08:38:35.274549 67333 storage/replica.go:863 [replicaGC,s2,r1/12:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:35.280941 67306 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 65 (id=6457affa, encoded size=30252, 1 rocksdb batches, 55 log entries)
[08:42:54][Step 2/2] I181018 08:38:35.299604 67306 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 18ms [clear=1ms batch=0ms entries=16ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:35.301425 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:35.305004 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):13): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=13, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.321490 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=df97bafd] proposing ADD_REPLICA((n2,s2):13): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):13] next=14
[08:42:54][Step 2/2] I181018 08:38:35.332265 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):13): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):13, next=14, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.345376 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=f5a4c3d7] proposing REMOVE_REPLICA((n2,s2):13): updated=[(n1,s1):1 (n3,s3):3] next=14
[08:42:54][Step 2/2] I181018 08:38:35.357885 67209 storage/store.go:3640 [s2,r1/13:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:35.358369 67209 storage/store.go:3640 [s2,r1/13:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:35.360763 67384 storage/store.go:2580 [replicaGC,s2,r1/13:/M{in-ax}] removing replica r1/13
[08:42:54][Step 2/2] I181018 08:38:35.363322 67384 storage/replica.go:863 [replicaGC,s2,r1/13:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:35.365381 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 39c839b8 at applied index 70
[08:42:54][Step 2/2] I181018 08:38:35.368449 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 126, log entries: 60, rate-limit: 2.0 MiB/sec, 9ms
[08:42:54][Step 2/2] I181018 08:38:35.371096 67387 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 70 (id=39c839b8, encoded size=32463, 1 rocksdb batches, 60 log entries)
[08:42:54][Step 2/2] I181018 08:38:35.402357 67387 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 31ms [clear=1ms batch=0ms entries=28ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:35.404965 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:35.408127 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):14): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=14, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.425502 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=9f956393] proposing ADD_REPLICA((n2,s2):14): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):14] next=15
[08:42:54][Step 2/2] I181018 08:38:35.436301 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):14): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):14, next=15, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.448051 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=abe50e9e] proposing REMOVE_REPLICA((n2,s2):14): updated=[(n1,s1):1 (n3,s3):3] next=15
[08:42:54][Step 2/2] I181018 08:38:35.463663 67209 storage/store.go:3640 [s2,r1/14:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:35.468497 67209 storage/store.go:3640 [s2,r1/14:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:35.472061 67378 storage/store.go:2580 [replicaGC,s2,r1/14:/M{in-ax}] removing replica r1/14
[08:42:54][Step 2/2] I181018 08:38:35.474337 67378 storage/replica.go:863 [replicaGC,s2,r1/14:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:35.479795 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot ac51bd80 at applied index 75
[08:42:54][Step 2/2] I181018 08:38:35.482398 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 133, log entries: 65, rate-limit: 2.0 MiB/sec, 11ms
[08:42:54][Step 2/2] I181018 08:38:35.485320 67389 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 75 (id=ac51bd80, encoded size=34674, 1 rocksdb batches, 65 log entries)
[08:42:54][Step 2/2] I181018 08:38:35.508097 67389 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 23ms [clear=1ms batch=0ms entries=20ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:35.514090 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):15): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=15, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.527919 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=79790e22] proposing ADD_REPLICA((n2,s2):15): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):15] next=16
[08:42:54][Step 2/2] I181018 08:38:35.541271 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):15): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):15, next=16, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.555991 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=9ee52475] proposing REMOVE_REPLICA((n2,s2):15): updated=[(n1,s1):1 (n3,s3):3] next=16
[08:42:54][Step 2/2] I181018 08:38:35.569720 67209 storage/store.go:3640 [s2,r1/15:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:35.570223 67209 storage/store.go:3640 [s2,r1/15:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:35.571442 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 52fc24ba at applied index 79
[08:42:54][Step 2/2] I181018 08:38:35.571850 67308 storage/store.go:2580 [replicaGC,s2,r1/15:/M{in-ax}] removing replica r1/15
[08:42:54][Step 2/2] I181018 08:38:35.574586 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 139, log entries: 69, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:35.574773 67308 storage/replica.go:863 [replicaGC,s2,r1/15:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:35.577142 67392 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 79 (id=52fc24ba, encoded size=36625, 1 rocksdb batches, 69 log entries)
[08:42:54][Step 2/2] I181018 08:38:35.596365 67392 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 19ms [clear=1ms batch=0ms entries=16ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:35.600913 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):16): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=16, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.614372 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=5751887c] proposing ADD_REPLICA((n2,s2):16): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):16] next=17
[08:42:54][Step 2/2] I181018 08:38:35.626406 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):16): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):16, next=17, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.646987 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=47f1c8d5] proposing REMOVE_REPLICA((n2,s2):16): updated=[(n1,s1):1 (n3,s3):3] next=17
[08:42:54][Step 2/2] I181018 08:38:35.662394 67209 storage/store.go:3640 [s2,r1/16:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:35.664454 67209 storage/store.go:3640 [s2,r1/16:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:35.664639 67394 storage/store.go:2580 [replicaGC,s2,r1/16:/M{in-ax}] removing replica r1/16
[08:42:54][Step 2/2] I181018 08:38:35.664920 66778 gossip/gossip.go:1510 [n2] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:38:35.664963 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 8b1dd492 at applied index 83
[08:42:54][Step 2/2] I181018 08:38:35.668628 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 145, log entries: 73, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:35.669203 67394 storage/replica.go:863 [replicaGC,s2,r1/16:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:35.674746 67325 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 83 (id=8b1dd492, encoded size=38576, 1 rocksdb batches, 73 log entries)
[08:42:54][Step 2/2] I181018 08:38:35.702644 67325 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 28ms [clear=1ms batch=0ms entries=25ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:35.705184 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:35.711831 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):17): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=17, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.745171 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=2f94b572] proposing ADD_REPLICA((n2,s2):17): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):17] next=18
[08:42:54][Step 2/2] I181018 08:38:35.776902 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):17): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):17, next=18, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.792072 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=518dfd34] proposing REMOVE_REPLICA((n2,s2):17): updated=[(n1,s1):1 (n3,s3):3] next=18
[08:42:54][Step 2/2] I181018 08:38:35.806430 67209 storage/store.go:3640 [s2,r1/17:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:35.809404 67341 storage/store.go:2580 [replicaGC,s2,r1/17:/M{in-ax}] removing replica r1/17
[08:42:54][Step 2/2] I181018 08:38:35.810492 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 0258d248 at applied index 89
[08:42:54][Step 2/2] I181018 08:38:35.812732 67341 storage/replica.go:863 [replicaGC,s2,r1/17:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:35.814707 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 153, log entries: 79, rate-limit: 2.0 MiB/sec, 7ms
[08:42:54][Step 2/2] I181018 08:38:35.818449 67343 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 89 (id=0258d248, encoded size=40957, 1 rocksdb batches, 79 log entries)
[08:42:54][Step 2/2] I181018 08:38:35.849648 67343 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 31ms [clear=1ms batch=0ms entries=28ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:35.855238 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):18): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=18, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.872273 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=84f8d9c7] proposing ADD_REPLICA((n2,s2):18): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):18] next=19
[08:42:54][Step 2/2] I181018 08:38:35.884075 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):18): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):18, next=19, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.896794 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=95a03ab6] proposing REMOVE_REPLICA((n2,s2):18): updated=[(n1,s1):1 (n3,s3):3] next=19
[08:42:54][Step 2/2] I181018 08:38:35.912855 67209 storage/store.go:3640 [s2,r1/18:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:35.913284 67209 storage/store.go:3640 [s2,r1/18:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:35.914835 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 6988c8d0 at applied index 94
[08:42:54][Step 2/2] I181018 08:38:35.915386 67359 storage/store.go:2580 [replicaGC,s2,r1/18:/M{in-ax}] removing replica r1/18
[08:42:54][Step 2/2] I181018 08:38:35.918366 67359 storage/replica.go:863 [replicaGC,s2,r1/18:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:35.918404 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 160, log entries: 84, rate-limit: 2.0 MiB/sec, 7ms
[08:42:54][Step 2/2] I181018 08:38:35.921186 67249 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 94 (id=6988c8d0, encoded size=43078, 1 rocksdb batches, 84 log entries)
[08:42:54][Step 2/2] I181018 08:38:35.948428 67249 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 27ms [clear=1ms batch=0ms entries=24ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:35.954119 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):19): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=19, gen=0]
[08:42:54][Step 2/2] I181018 08:38:35.973166 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=4271ccb7] proposing ADD_REPLICA((n2,s2):19): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):19] next=20
[08:42:54][Step 2/2] I181018 08:38:35.988319 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):19): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):19, next=20, gen=0]
[08:42:54][Step 2/2] I181018 08:38:36.019634 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=13aa41ad] proposing REMOVE_REPLICA((n2,s2):19): updated=[(n1,s1):1 (n3,s3):3] next=20
[08:42:54][Step 2/2] I181018 08:38:36.035304 67151 gossip/gossip.go:1510 [n3] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:38:36.038920 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot fb766e33 at applied index 98
[08:42:54][Step 2/2] I181018 08:38:36.040915 67209 storage/store.go:3640 [s2,r1/19:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:36.042914 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 166, log entries: 88, rate-limit: 2.0 MiB/sec, 7ms
[08:42:54][Step 2/2] I181018 08:38:36.048758 67361 storage/store.go:2580 [replicaGC,s2,r1/19:/M{in-ax}] removing replica r1/19
[08:42:54][Step 2/2] I181018 08:38:36.051430 67361 storage/replica.go:863 [replicaGC,s2,r1/19:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:36.053230 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):20): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=20, gen=0]
[08:42:54][Step 2/2] I181018 08:38:36.066207 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=fcc26d94] proposing ADD_REPLICA((n2,s2):20): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):20] next=21
[08:42:54][Step 2/2] I181018 08:38:36.073896 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):20): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):20, next=21, gen=0]
[08:42:54][Step 2/2] I181018 08:38:36.086156 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:36.108505 67250 storage/store_snapshot.go:621 [raftsnapshot,s1,r1/1:/M{in-ax}] sending Raft snapshot 0697ee20 at applied index 100
[08:42:54][Step 2/2] I181018 08:38:36.112417 67250 storage/store_snapshot.go:664 [raftsnapshot,s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):20: kv pairs: 169, log entries: 90, rate-limit: 8.0 MiB/sec, 7ms
[08:42:54][Step 2/2] I181018 08:38:36.115054 67415 storage/replica_raftstorage.go:804 [s2,r1/20:{-}] applying Raft snapshot at index 100 (id=0697ee20, encoded size=46036, 1 rocksdb batches, 90 log entries)
[08:42:54][Step 2/2] I181018 08:38:36.118435 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=31fd521c] proposing REMOVE_REPLICA((n2,s2):20): updated=[(n1,s1):1 (n3,s3):3] next=21
[08:42:54][Step 2/2] I181018 08:38:36.166143 67415 storage/replica_raftstorage.go:810 [s2,r1/20:/M{in-ax}] applied Raft snapshot in 51ms [clear=4ms batch=0ms entries=45ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:36.175873 67209 storage/store.go:3640 [s2,r1/20:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:36.176855 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 352cb7c3 at applied index 104
[08:42:54][Step 2/2] I181018 08:38:36.178874 67416 storage/store.go:2580 [replicaGC,s2,r1/20:/M{in-ax}] removing replica r1/20
[08:42:54][Step 2/2] I181018 08:38:36.180260 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 174, log entries: 94, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:36.181518 67416 storage/replica.go:863 [replicaGC,s2,r1/20:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:36.184992 67346 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 104 (id=352cb7c3, encoded size=47321, 1 rocksdb batches, 94 log entries)
[08:42:54][Step 2/2] I181018 08:38:36.221721 67346 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 36ms [clear=1ms batch=0ms entries=33ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:36.226549 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):21): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=21, gen=0]
[08:42:54][Step 2/2] I181018 08:38:36.239335 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=ce7d3cbe] proposing ADD_REPLICA((n2,s2):21): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):21] next=22
[08:42:54][Step 2/2] I181018 08:38:36.253148 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):21): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):21, next=22, gen=0]
[08:42:54][Step 2/2] I181018 08:38:36.275248 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=227a4cfb] proposing REMOVE_REPLICA((n2,s2):21): updated=[(n1,s1):1 (n3,s3):3] next=22
[08:42:54][Step 2/2] I181018 08:38:36.290168 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot e5cf4b83 at applied index 108
[08:42:54][Step 2/2] I181018 08:38:36.292243 67510 storage/store.go:2580 [replicaGC,s2,r1/21:/M{in-ax}] removing replica r1/21
[08:42:54][Step 2/2] I181018 08:38:36.292711 67209 storage/store.go:3640 [s2,r1/21:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:36.293794 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 180, log entries: 98, rate-limit: 2.0 MiB/sec, 7ms
[08:42:54][Step 2/2] I181018 08:38:36.301946 67510 storage/replica.go:863 [replicaGC,s2,r1/21:/M{in-ax}] removed 48 (43+5) keys in 8ms [clear=2ms commit=6ms]
[08:42:54][Step 2/2] I181018 08:38:36.310666 67453 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 108 (id=e5cf4b83, encoded size=49272, 1 rocksdb batches, 98 log entries)
[08:42:54][Step 2/2] I181018 08:38:36.348384 67453 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 37ms [clear=3ms batch=0ms entries=33ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:36.353819 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):22): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=22, gen=0]
[08:42:54][Step 2/2] I181018 08:38:36.374164 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=11f1f33e] proposing ADD_REPLICA((n2,s2):22): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):22] next=23
[08:42:54][Step 2/2] I181018 08:38:36.391262 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):22): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):22, next=23, gen=0]
[08:42:54][Step 2/2] I181018 08:38:36.417467 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=e5bea646] proposing REMOVE_REPLICA((n2,s2):22): updated=[(n1,s1):1 (n3,s3):3] next=23
[08:42:54][Step 2/2] I181018 08:38:36.432777 67209 storage/store.go:3640 [s2,r1/22:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:36.433283 67407 storage/store.go:2580 [replicaGC,s2,r1/22:/M{in-ax}] removing replica r1/22
[08:42:54][Step 2/2] I181018 08:38:36.435955 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 49156623 at applied index 113
[08:42:54][Step 2/2] I181018 08:38:36.436153 67407 storage/replica.go:863 [replicaGC,s2,r1/22:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:36.439439 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 187, log entries: 103, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:36.444036 67419 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 113 (id=49156623, encoded size=51393, 1 rocksdb batches, 103 log entries)
[08:42:54][Step 2/2] I181018 08:38:36.474915 67419 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 31ms [clear=1ms batch=0ms entries=28ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:36.479601 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):23): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=23, gen=0]
[08:42:54][Step 2/2] I181018 08:38:36.492245 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b2a48b62] proposing ADD_REPLICA((n2,s2):23): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):23] next=24
[08:42:54][Step 2/2] I181018 08:38:36.503940 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):23): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):23, next=24, gen=0]
[08:42:54][Step 2/2] I181018 08:38:36.517892 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=3f1da0c9] proposing REMOVE_REPLICA((n2,s2):23): updated=[(n1,s1):1 (n3,s3):3] next=24
[08:42:54][Step 2/2] I181018 08:38:36.526326 67209 storage/store.go:3640 [s2,r1/23:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:36.526805 67209 storage/store.go:3640 [s2,r1/23:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:36.528944 67458 storage/store.go:2580 [replicaGC,s2,r1/23:/M{in-ax}] removing replica r1/23
[08:42:54][Step 2/2] I181018 08:38:36.530933 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 84a6c3cf at applied index 117
[08:42:54][Step 2/2] I181018 08:38:36.531379 67458 storage/replica.go:863 [replicaGC,s2,r1/23:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:36.533841 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 193, log entries: 107, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:36.538482 67409 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 117 (id=84a6c3cf, encoded size=53344, 1 rocksdb batches, 107 log entries)
[08:42:54][Step 2/2] I181018 08:38:36.579820 67409 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 41ms [clear=2ms batch=0ms entries=37ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:36.583709 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):24): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=24, gen=0]
[08:42:54][Step 2/2] I181018 08:38:36.600831 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=12833188] proposing ADD_REPLICA((n2,s2):24): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):24] next=25
[08:42:54][Step 2/2] I181018 08:38:36.623597 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):24): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):24, next=25, gen=0]
[08:42:54][Step 2/2] I181018 08:38:36.642137 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=397adb0d] proposing REMOVE_REPLICA((n2,s2):24): updated=[(n1,s1):1 (n3,s3):3] next=25
[08:42:54][Step 2/2] I181018 08:38:36.665005 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot d4ad3f5c at applied index 123
[08:42:54][Step 2/2] I181018 08:38:36.667961 67209 storage/store.go:3640 [s2,r1/24:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:36.668837 67209 storage/store.go:3640 [s2,r1/24:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:36.669710 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 201, log entries: 113, rate-limit: 2.0 MiB/sec, 17ms
[08:42:54][Step 2/2] I181018 08:38:36.670209 67474 storage/store.go:2580 [replicaGC,s2,r1/24:/M{in-ax}] removing replica r1/24
[08:42:54][Step 2/2] I181018 08:38:36.672168 67474 storage/replica.go:863 [replicaGC,s2,r1/24:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:36.675988 67531 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 123 (id=d4ad3f5c, encoded size=55635, 1 rocksdb batches, 113 log entries)
[08:42:54][Step 2/2] I181018 08:38:36.715328 67531 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 39ms [clear=1ms batch=0ms entries=36ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:36.722097 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:36.726972 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):25): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=25, gen=0]
[08:42:54][Step 2/2] I181018 08:38:36.766111 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=aec84952] proposing ADD_REPLICA((n2,s2):25): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):25] next=26
[08:42:54][Step 2/2] I181018 08:38:36.778826 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):25): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):25, next=26, gen=0]
[08:42:54][Step 2/2] I181018 08:38:36.795838 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b7cb3ba7] proposing REMOVE_REPLICA((n2,s2):25): updated=[(n1,s1):1 (n3,s3):3] next=26
[08:42:54][Step 2/2] I181018 08:38:36.807935 67209 storage/store.go:3640 [s2,r1/25:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:36.809140 67534 storage/store.go:2580 [replicaGC,s2,r1/25:/M{in-ax}] removing replica r1/25
[08:42:54][Step 2/2] I181018 08:38:36.809883 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot a446df04 at applied index 128
[08:42:54][Step 2/2] I181018 08:38:36.812283 67534 storage/replica.go:863 [replicaGC,s2,r1/25:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:36.814405 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 208, log entries: 118, rate-limit: 2.0 MiB/sec, 7ms
[08:42:54][Step 2/2] I181018 08:38:36.820932 67537 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 128 (id=a446df04, encoded size=57848, 1 rocksdb batches, 118 log entries)
[08:42:54][Step 2/2] I181018 08:38:36.857286 67537 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 36ms [clear=2ms batch=0ms entries=32ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:36.861778 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):26): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=26, gen=0]
[08:42:54][Step 2/2] I181018 08:38:36.872740 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=6363ed60] proposing ADD_REPLICA((n2,s2):26): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):26] next=27
[08:42:54][Step 2/2] I181018 08:38:36.884626 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):26): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):26, next=27, gen=0]
[08:42:54][Step 2/2] I181018 08:38:36.898248 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=ef2d42fe] proposing REMOVE_REPLICA((n2,s2):26): updated=[(n1,s1):1 (n3,s3):3] next=27
[08:42:54][Step 2/2] I181018 08:38:36.909426 67209 storage/store.go:3640 [s2,r1/26:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:36.909887 67209 storage/store.go:3640 [s2,r1/26:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:36.911236 67572 storage/store.go:2580 [replicaGC,s2,r1/26:/M{in-ax}] removing replica r1/26
[08:42:54][Step 2/2] I181018 08:38:36.912660 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot f6e1a6fd at applied index 132
[08:42:54][Step 2/2] I181018 08:38:36.914970 67572 storage/replica.go:863 [replicaGC,s2,r1/26:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:36.916875 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 214, log entries: 122, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:36.920422 67424 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 132 (id=f6e1a6fd, encoded size=59803, 1 rocksdb batches, 122 log entries)
[08:42:54][Step 2/2] I181018 08:38:36.953538 67424 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 33ms [clear=2ms batch=0ms entries=29ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:36.958671 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):27): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=27, gen=0]
[08:42:54][Step 2/2] I181018 08:38:36.973794 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=3cb156a5] proposing ADD_REPLICA((n2,s2):27): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):27] next=28
[08:42:54][Step 2/2] I181018 08:38:36.988597 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):27): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):27, next=28, gen=0]
[08:42:54][Step 2/2] I181018 08:38:37.009741 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=47b3616e] proposing REMOVE_REPLICA((n2,s2):27): updated=[(n1,s1):1 (n3,s3):3] next=28
[08:42:54][Step 2/2] I181018 08:38:37.020776 67209 storage/store.go:3640 [s2,r1/27:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:37.042173 67517 storage/store.go:2580 [replicaGC,s2,r1/27:/M{in-ax}] removing replica r1/27
[08:42:54][Step 2/2] I181018 08:38:37.043821 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 94bd2ca7 at applied index 136
[08:42:54][Step 2/2] I181018 08:38:37.051742 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 220, log entries: 126, rate-limit: 2.0 MiB/sec, 19ms
[08:42:54][Step 2/2] I181018 08:38:37.055182 67517 storage/replica.go:863 [replicaGC,s2,r1/27:/M{in-ax}] removed 48 (43+5) keys in 6ms [clear=5ms commit=0ms]
[08:42:54][Step 2/2] E181018 08:38:37.062288 67578 storage/queue.go:791 [replicaGC,s2,r1/?:{-}] cannot process uninitialized replica
[08:42:54][Step 2/2] I181018 08:38:37.062748 67603 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 136 (id=94bd2ca7, encoded size=61758, 1 rocksdb batches, 126 log entries)
[08:42:54][Step 2/2] I181018 08:38:37.103630 67603 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 41ms [clear=2ms batch=0ms entries=35ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:37.108375 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):28): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=28, gen=0]
[08:42:54][Step 2/2] I181018 08:38:37.118871 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=69106895] proposing ADD_REPLICA((n2,s2):28): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):28] next=29
[08:42:54][Step 2/2] I181018 08:38:37.131911 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):28): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):28, next=29, gen=0]
[08:42:54][Step 2/2] I181018 08:38:37.146153 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=bc84e2f3] proposing REMOVE_REPLICA((n2,s2):28): updated=[(n1,s1):1 (n3,s3):3] next=29
[08:42:54][Step 2/2] I181018 08:38:37.162940 67209 storage/store.go:3640 [s2,r1/28:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:37.164280 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 74867bb1 at applied index 142
[08:42:54][Step 2/2] I181018 08:38:37.167246 67209 storage/store.go:3640 [s2,r1/28:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:37.168228 67604 storage/store.go:2580 [replicaGC,s2,r1/28:/M{in-ax}] removing replica r1/28
[08:42:54][Step 2/2] I181018 08:38:37.169203 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 228, log entries: 132, rate-limit: 2.0 MiB/sec, 13ms
[08:42:54][Step 2/2] I181018 08:38:37.171011 67604 storage/replica.go:863 [replicaGC,s2,r1/28:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:37.176217 67482 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 142 (id=74867bb1, encoded size=64058, 1 rocksdb batches, 132 log entries)
[08:42:54][Step 2/2] I181018 08:38:37.227427 67482 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 51ms [clear=2ms batch=0ms entries=46ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:37.229384 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:37.233287 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):29): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=29, gen=0]
[08:42:54][Step 2/2] I181018 08:38:37.250227 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=2f7b6924] proposing ADD_REPLICA((n2,s2):29): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):29] next=30
[08:42:54][Step 2/2] I181018 08:38:37.261971 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):29): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):29, next=30, gen=0]
[08:42:54][Step 2/2] I181018 08:38:37.281856 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=f7575ea2] proposing REMOVE_REPLICA((n2,s2):29): updated=[(n1,s1):1 (n3,s3):3] next=30
[08:42:54][Step 2/2] I181018 08:38:37.298267 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 0ad259fd at applied index 147
[08:42:54][Step 2/2] I181018 08:38:37.300230 67209 storage/store.go:3640 [s2,r1/29:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:37.302316 67547 storage/store.go:2580 [replicaGC,s2,r1/29:/M{in-ax}] removing replica r1/29
[08:42:54][Step 2/2] I181018 08:38:37.304257 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 235, log entries: 137, rate-limit: 2.0 MiB/sec, 10ms
[08:42:54][Step 2/2] I181018 08:38:37.304561 67547 storage/replica.go:863 [replicaGC,s2,r1/29:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:37.309734 67582 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 147 (id=0ad259fd, encoded size=66279, 1 rocksdb batches, 137 log entries)
[08:42:54][Step 2/2] I181018 08:38:37.355919 67582 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 46ms [clear=2ms batch=0ms entries=41ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:37.360162 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):30): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=30, gen=0]
[08:42:54][Step 2/2] I181018 08:38:37.370455 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=aeebdae8] proposing ADD_REPLICA((n2,s2):30): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):30] next=31
[08:42:54][Step 2/2] I181018 08:38:37.380072 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):30): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):30, next=31, gen=0]
[08:42:54][Step 2/2] I181018 08:38:37.390139 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=f8c855b4] proposing REMOVE_REPLICA((n2,s2):30): updated=[(n1,s1):1 (n3,s3):3] next=31
[08:42:54][Step 2/2] I181018 08:38:37.401998 67209 storage/store.go:3640 [s2,r1/30:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:37.403806 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 748333ed at applied index 151
[08:42:54][Step 2/2] I181018 08:38:37.404962 67563 storage/store.go:2580 [replicaGC,s2,r1/30:/M{in-ax}] removing replica r1/30
[08:42:54][Step 2/2] I181018 08:38:37.407358 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 241, log entries: 141, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:37.407778 67563 storage/replica.go:863 [replicaGC,s2,r1/30:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:37.411643 67584 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 151 (id=748333ed, encoded size=68238, 1 rocksdb batches, 141 log entries)
[08:42:54][Step 2/2] I181018 08:38:37.451134 67584 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 39ms [clear=2ms batch=0ms entries=35ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:37.456107 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):31): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=31, gen=0]
[08:42:54][Step 2/2] I181018 08:38:37.471146 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=ab4b91bb] proposing ADD_REPLICA((n2,s2):31): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):31] next=32
[08:42:54][Step 2/2] I181018 08:38:37.481969 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):31): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):31, next=32, gen=0]
[08:42:54][Step 2/2] I181018 08:38:37.501523 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=20be1b3d] proposing REMOVE_REPLICA((n2,s2):31): updated=[(n1,s1):1 (n3,s3):3] next=32
[08:42:54][Step 2/2] I181018 08:38:37.533773 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot d7755845 at applied index 156
[08:42:54][Step 2/2] I181018 08:38:37.538064 67567 storage/store.go:2580 [replicaGC,s2,r1/31:/M{in-ax}] removing replica r1/31
[08:42:54][Step 2/2] I181018 08:38:37.538864 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 248, log entries: 146, rate-limit: 2.0 MiB/sec, 9ms
[08:42:54][Step 2/2] I181018 08:38:37.539728 67209 storage/store.go:3640 [s2,r1/31:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:37.541231 67567 storage/replica.go:863 [replicaGC,s2,r1/31:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:37.545951 67590 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 156 (id=d7755845, encoded size=70369, 1 rocksdb batches, 146 log entries)
[08:42:54][Step 2/2] I181018 08:38:37.600838 67590 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 55ms [clear=2ms batch=0ms entries=50ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:37.605426 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):32): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=32, gen=0]
[08:42:54][Step 2/2] I181018 08:38:37.618744 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b0f4cebf] proposing ADD_REPLICA((n2,s2):32): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):32] next=33
[08:42:54][Step 2/2] I181018 08:38:37.632488 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):32): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):32, next=33, gen=0]
[08:42:54][Step 2/2] I181018 08:38:37.649822 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=13d32109] proposing REMOVE_REPLICA((n2,s2):32): updated=[(n1,s1):1 (n3,s3):3] next=33
[08:42:54][Step 2/2] I181018 08:38:37.667306 67209 storage/store.go:3640 [s2,r1/32:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:37.667883 67209 storage/store.go:3640 [s2,r1/32:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:37.669446 67651 storage/store.go:2580 [replicaGC,s2,r1/32:/M{in-ax}] removing replica r1/32
[08:42:54][Step 2/2] I181018 08:38:37.671517 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 1fce6efc at applied index 162
[08:42:54][Step 2/2] I181018 08:38:37.672466 67651 storage/replica.go:863 [replicaGC,s2,r1/32:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:37.676153 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 256, log entries: 152, rate-limit: 2.0 MiB/sec, 7ms
[08:42:54][Step 2/2] I181018 08:38:37.681347 67520 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 162 (id=1fce6efc, encoded size=72672, 1 rocksdb batches, 152 log entries)
[08:42:54][Step 2/2] I181018 08:38:37.738935 67520 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 57ms [clear=2ms batch=0ms entries=53ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:37.747491 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:37.751168 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):33): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=33, gen=0]
[08:42:54][Step 2/2] I181018 08:38:37.766990 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=23514202] proposing ADD_REPLICA((n2,s2):33): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):33] next=34
[08:42:54][Step 2/2] I181018 08:38:37.785544 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):33): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):33, next=34, gen=0]
[08:42:54][Step 2/2] I181018 08:38:37.801215 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=f2f6d583] proposing REMOVE_REPLICA((n2,s2):33): updated=[(n1,s1):1 (n3,s3):3] next=34
[08:42:54][Step 2/2] I181018 08:38:37.811494 67209 storage/store.go:3640 [s2,r1/33:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:37.813970 67209 storage/store.go:3640 [s2,r1/33:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:37.815085 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot e39ca32c at applied index 166
[08:42:54][Step 2/2] I181018 08:38:37.815122 67625 storage/store.go:2580 [replicaGC,s2,r1/33:/M{in-ax}] removing replica r1/33
[08:42:54][Step 2/2] I181018 08:38:37.821852 67625 storage/replica.go:863 [replicaGC,s2,r1/33:/M{in-ax}] removed 48 (43+5) keys in 5ms [clear=5ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:37.822680 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 262, log entries: 156, rate-limit: 2.0 MiB/sec, 10ms
[08:42:54][Step 2/2] I181018 08:38:37.828562 67613 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 166 (id=e39ca32c, encoded size=74721, 1 rocksdb batches, 156 log entries)
[08:42:54][Step 2/2] I181018 08:38:37.883088 67613 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 54ms [clear=3ms batch=0ms entries=49ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:37.888045 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):34): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=34, gen=0]
[08:42:54][Step 2/2] I181018 08:38:37.899615 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=650a70b4] proposing ADD_REPLICA((n2,s2):34): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):34] next=35
[08:42:54][Step 2/2] I181018 08:38:37.911633 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):34): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):34, next=35, gen=0]
[08:42:54][Step 2/2] I181018 08:38:37.931555 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=58e3c5d7] proposing REMOVE_REPLICA((n2,s2):34): updated=[(n1,s1):1 (n3,s3):3] next=35
[08:42:54][Step 2/2] I181018 08:38:37.941636 67209 storage/store.go:3640 [s2,r1/34:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:37.946675 67685 storage/store.go:2580 [replicaGC,s2,r1/34:/M{in-ax}] removing replica r1/34
[08:42:54][Step 2/2] I181018 08:38:37.946723 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 7cefaedc at applied index 171
[08:42:54][Step 2/2] I181018 08:38:37.952374 67685 storage/replica.go:863 [replicaGC,s2,r1/34:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:37.960161 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 269, log entries: 161, rate-limit: 2.0 MiB/sec, 16ms
[08:42:54][Step 2/2] I181018 08:38:37.977551 67644 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 171 (id=7cefaedc, encoded size=76852, 1 rocksdb batches, 161 log entries)
[08:42:54][Step 2/2] I181018 08:38:38.044073 67644 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 66ms [clear=11ms batch=0ms entries=53ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:38.046020 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:38.049976 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):35): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=35, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.066799 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=d9dabc76] proposing ADD_REPLICA((n2,s2):35): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):35] next=36
[08:42:54][Step 2/2] I181018 08:38:38.076477 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):35): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):35, next=36, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.114975 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=f688c215] proposing REMOVE_REPLICA((n2,s2):35): updated=[(n1,s1):1 (n3,s3):3] next=36
[08:42:54][Step 2/2] I181018 08:38:38.132882 67209 storage/store.go:3640 [s2,r1/35:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:38.136073 67675 storage/store.go:2580 [replicaGC,s2,r1/35:/M{in-ax}] removing replica r1/35
[08:42:54][Step 2/2] I181018 08:38:38.137897 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot d4026930 at applied index 177
[08:42:54][Step 2/2] I181018 08:38:38.139177 67675 storage/replica.go:863 [replicaGC,s2,r1/35:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:38.143535 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 277, log entries: 167, rate-limit: 2.0 MiB/sec, 10ms
[08:42:54][Step 2/2] I181018 08:38:38.149414 67593 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 177 (id=d4026930, encoded size=79245, 1 rocksdb batches, 167 log entries)
[08:42:54][Step 2/2] I181018 08:38:38.201242 67593 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 52ms [clear=2ms batch=0ms entries=47ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:38.206087 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):36): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=36, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.221207 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=e9dc4452] proposing ADD_REPLICA((n2,s2):36): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):36] next=37
[08:42:54][Step 2/2] I181018 08:38:38.246517 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):36): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):36, next=37, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.261207 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=63e0920a] proposing REMOVE_REPLICA((n2,s2):36): updated=[(n1,s1):1 (n3,s3):3] next=37
[08:42:54][Step 2/2] I181018 08:38:38.287632 67209 storage/store.go:3640 [s2,r1/36:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:38.288098 67209 storage/store.go:3640 [s2,r1/36:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:38.288375 67209 storage/store.go:3640 [s2,r1/36:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:38.288702 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 333459ed at applied index 182
[08:42:54][Step 2/2] I181018 08:38:38.289446 67594 storage/store.go:2580 [replicaGC,s2,r1/36:/M{in-ax}] removing replica r1/36
[08:42:54][Step 2/2] I181018 08:38:38.292595 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 283, log entries: 4, rate-limit: 2.0 MiB/sec, 14ms
[08:42:54][Step 2/2] I181018 08:38:38.293185 67594 storage/replica.go:863 [replicaGC,s2,r1/36:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:38.298700 67648 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 182 (id=333459ed, encoded size=22270, 1 rocksdb batches, 4 log entries)
[08:42:54][Step 2/2] I181018 08:38:38.305402 67648 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 6ms [clear=3ms batch=0ms entries=2ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:38.309369 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):37): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=37, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.326486 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=0ce0bc88] proposing ADD_REPLICA((n2,s2):37): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):37] next=38
[08:42:54][Step 2/2] I181018 08:38:38.339716 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):37): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):37, next=38, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.366379 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=591ed415] proposing REMOVE_REPLICA((n2,s2):37): updated=[(n1,s1):1 (n3,s3):3] next=38
[08:42:54][Step 2/2] I181018 08:38:38.380197 67209 storage/store.go:3640 [s2,r1/37:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:38.382266 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot f5e55518 at applied index 187
[08:42:54][Step 2/2] I181018 08:38:38.382681 67659 storage/store.go:2580 [replicaGC,s2,r1/37:/M{in-ax}] removing replica r1/37
[08:42:54][Step 2/2] I181018 08:38:38.386343 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 290, log entries: 9, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:38.388517 67659 storage/replica.go:863 [replicaGC,s2,r1/37:/M{in-ax}] removed 48 (43+5) keys in 5ms [clear=5ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:38.393070 67716 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 187 (id=f5e55518, encoded size=24401, 1 rocksdb batches, 9 log entries)
[08:42:54][Step 2/2] I181018 08:38:38.400473 67716 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 7ms [clear=2ms batch=0ms entries=3ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:38.404892 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):38): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=38, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.418635 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=e6b97869] proposing ADD_REPLICA((n2,s2):38): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):38] next=39
[08:42:54][Step 2/2] I181018 08:38:38.432497 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):38): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):38, next=39, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.444443 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=7617ad23] proposing REMOVE_REPLICA((n2,s2):38): updated=[(n1,s1):1 (n3,s3):3] next=39
[08:42:54][Step 2/2] I181018 08:38:38.455732 67209 storage/store.go:3640 [s2,r1/38:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:38.456266 67209 storage/store.go:3640 [s2,r1/38:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:38.456929 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 26599419 at applied index 192
[08:42:54][Step 2/2] I181018 08:38:38.458882 67677 storage/store.go:2580 [replicaGC,s2,r1/38:/M{in-ax}] removing replica r1/38
[08:42:54][Step 2/2] I181018 08:38:38.463105 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 297, log entries: 14, rate-limit: 2.0 MiB/sec, 9ms
[08:42:54][Step 2/2] I181018 08:38:38.463381 67677 storage/replica.go:863 [replicaGC,s2,r1/38:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:38.472089 67632 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 192 (id=26599419, encoded size=26532, 1 rocksdb batches, 14 log entries)
[08:42:54][Step 2/2] I181018 08:38:38.481308 67632 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 9ms [clear=3ms batch=0ms entries=4ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:38.484088 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:38.489219 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):39): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=39, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.510293 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=46f81314] proposing ADD_REPLICA((n2,s2):39): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):39] next=40
[08:42:54][Step 2/2] I181018 08:38:38.523349 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):39): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):39, next=40, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.546780 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=580bd513] proposing REMOVE_REPLICA((n2,s2):39): updated=[(n1,s1):1 (n3,s3):3] next=40
[08:42:54][Step 2/2] I181018 08:38:38.559559 67209 storage/store.go:3640 [s2,r1/39:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:38.562062 67717 storage/store.go:2580 [replicaGC,s2,r1/39:/M{in-ax}] removing replica r1/39
[08:42:54][Step 2/2] I181018 08:38:38.564341 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 1e5af404 at applied index 197
[08:42:54][Step 2/2] I181018 08:38:38.565210 67209 storage/store.go:3640 [s2,r1/39:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:38.565589 67209 storage/store.go:3640 [s2,r1/39:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:38.568249 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 304, log entries: 19, rate-limit: 2.0 MiB/sec, 8ms
[08:42:54][Step 2/2] I181018 08:38:38.569779 67717 storage/replica.go:863 [replicaGC,s2,r1/39:/M{in-ax}] removed 48 (43+5) keys in 6ms [clear=6ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:38.578604 67735 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 197 (id=1e5af404, encoded size=28753, 1 rocksdb batches, 19 log entries)
[08:42:54][Step 2/2] I181018 08:38:38.598005 67735 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 19ms [clear=6ms batch=0ms entries=10ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:38.601664 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:38.605963 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):40): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=40, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.624728 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=34843407] proposing ADD_REPLICA((n2,s2):40): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):40] next=41
[08:42:54][Step 2/2] I181018 08:38:38.641409 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):40): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):40, next=41, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.658208 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b6ea6999] proposing REMOVE_REPLICA((n2,s2):40): updated=[(n1,s1):1 (n3,s3):3] next=41
[08:42:54][Step 2/2] I181018 08:38:38.670328 67209 storage/store.go:3640 [s2,r1/40:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:38.673748 67751 storage/store.go:2580 [replicaGC,s2,r1/40:/M{in-ax}] removing replica r1/40
[08:42:54][Step 2/2] I181018 08:38:38.679909 67751 storage/replica.go:863 [replicaGC,s2,r1/40:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=2ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:38.681303 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 75dba9ee at applied index 201
[08:42:54][Step 2/2] I181018 08:38:38.688749 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 310, log entries: 23, rate-limit: 2.0 MiB/sec, 11ms
[08:42:54][Step 2/2] I181018 08:38:38.696441 67753 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 201 (id=75dba9ee, encoded size=30802, 1 rocksdb batches, 23 log entries)
[08:42:54][Step 2/2] I181018 08:38:38.719224 67753 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 22ms [clear=8ms batch=0ms entries=12ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:38.724326 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):41): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=41, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.737011 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=5b72734b] proposing ADD_REPLICA((n2,s2):41): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):41] next=42
[08:42:54][Step 2/2] I181018 08:38:38.748978 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):41): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):41, next=42, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.767635 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=6d35e236] proposing REMOVE_REPLICA((n2,s2):41): updated=[(n1,s1):1 (n3,s3):3] next=42
[08:42:54][Step 2/2] I181018 08:38:38.782782 67209 storage/store.go:3640 [s2,r1/41:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:38.784234 67209 storage/store.go:3640 [s2,r1/41:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:38.785442 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 8105b43c at applied index 205
[08:42:54][Step 2/2] I181018 08:38:38.789069 67764 storage/store.go:2580 [replicaGC,s2,r1/41:/M{in-ax}] removing replica r1/41
[08:42:54][Step 2/2] I181018 08:38:38.790283 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 316, log entries: 27, rate-limit: 2.0 MiB/sec, 7ms
[08:42:54][Step 2/2] I181018 08:38:38.796723 67764 storage/replica.go:863 [replicaGC,s2,r1/41:/M{in-ax}] removed 48 (43+5) keys in 7ms [clear=6ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:38.804138 67741 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 205 (id=8105b43c, encoded size=32761, 1 rocksdb batches, 27 log entries)
[08:42:54][Step 2/2] I181018 08:38:38.828388 67741 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 24ms [clear=5ms batch=0ms entries=17ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:38.831299 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):42): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=42, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.841747 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b148d3eb] proposing ADD_REPLICA((n2,s2):42): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):42] next=43
[08:42:54][Step 2/2] I181018 08:38:38.861501 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):42): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):42, next=43, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.897441 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=aa02775e] proposing REMOVE_REPLICA((n2,s2):42): updated=[(n1,s1):1 (n3,s3):3] next=43
[08:42:54][Step 2/2] I181018 08:38:38.923913 67664 storage/store.go:2580 [replicaGC,s2,r1/42:/M{in-ax}] removing replica r1/42
[08:42:54][Step 2/2] I181018 08:38:38.927690 67664 storage/replica.go:863 [replicaGC,s2,r1/42:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:38.929669 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 5e94940c at applied index 211
[08:42:54][Step 2/2] I181018 08:38:38.934350 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 324, log entries: 33, rate-limit: 2.0 MiB/sec, 9ms
[08:42:54][Step 2/2] I181018 08:38:38.940384 67705 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 211 (id=5e94940c, encoded size=35064, 1 rocksdb batches, 33 log entries)
[08:42:54][Step 2/2] I181018 08:38:38.954090 67705 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 13ms [clear=3ms batch=0ms entries=8ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:38.959498 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):43): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=43, gen=0]
[08:42:54][Step 2/2] I181018 08:38:38.974598 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=0c75b94d] proposing ADD_REPLICA((n2,s2):43): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):43] next=44
[08:42:54][Step 2/2] I181018 08:38:38.989544 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):43): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):43, next=44, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.008815 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=f8ccca6a] proposing REMOVE_REPLICA((n2,s2):43): updated=[(n1,s1):1 (n3,s3):3] next=44
[08:42:54][Step 2/2] I181018 08:38:39.029033 67209 storage/store.go:3640 [s2,r1/43:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:39.030067 67707 storage/store.go:2580 [replicaGC,s2,r1/43:/M{in-ax}] removing replica r1/43
[08:42:54][Step 2/2] I181018 08:38:39.031618 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot b1c8c495 at applied index 216
[08:42:54][Step 2/2] I181018 08:38:39.036820 67707 storage/replica.go:863 [replicaGC,s2,r1/43:/M{in-ax}] removed 48 (43+5) keys in 6ms [clear=5ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:39.038356 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 331, log entries: 38, rate-limit: 2.0 MiB/sec, 9ms
[08:42:54][Step 2/2] I181018 08:38:39.044792 67602 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 216 (id=b1c8c495, encoded size=37195, 1 rocksdb batches, 38 log entries)
[08:42:54][Step 2/2] I181018 08:38:39.061718 67602 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 17ms [clear=3ms batch=0ms entries=12ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:39.066807 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):44): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=44, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.080018 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=9b820f32] proposing ADD_REPLICA((n2,s2):44): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):44] next=45
[08:42:54][Step 2/2] I181018 08:38:39.090058 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):44): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):44, next=45, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.106226 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=95ab2463] proposing REMOVE_REPLICA((n2,s2):44): updated=[(n1,s1):1 (n3,s3):3] next=45
[08:42:54][Step 2/2] I181018 08:38:39.119779 67209 storage/store.go:3640 [s2,r1/44:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:39.122673 67709 storage/store.go:2580 [replicaGC,s2,r1/44:/M{in-ax}] removing replica r1/44
[08:42:54][Step 2/2] I181018 08:38:39.124509 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 91b8cb20 at applied index 220
[08:42:54][Step 2/2] I181018 08:38:39.129186 67709 storage/replica.go:863 [replicaGC,s2,r1/44:/M{in-ax}] removed 48 (43+5) keys in 5ms [clear=5ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:39.131940 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 337, log entries: 42, rate-limit: 2.0 MiB/sec, 10ms
[08:42:54][Step 2/2] I181018 08:38:39.139031 67828 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 220 (id=91b8cb20, encoded size=39154, 1 rocksdb batches, 42 log entries)
[08:42:54][Step 2/2] I181018 08:38:39.155895 67828 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 17ms [clear=3ms batch=0ms entries=11ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:39.160515 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):45): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=45, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.172561 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=49de6b80] proposing ADD_REPLICA((n2,s2):45): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):45] next=46
[08:42:54][Step 2/2] I181018 08:38:39.194804 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):45): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):45, next=46, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.211038 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=6a372ee3] proposing REMOVE_REPLICA((n2,s2):45): updated=[(n1,s1):1 (n3,s3):3] next=46
[08:42:54][Step 2/2] I181018 08:38:39.234708 67209 storage/store.go:3640 [s2,r1/45:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:39.237679 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot e2195e1f at applied index 224
[08:42:54][Step 2/2] I181018 08:38:39.238959 67813 storage/store.go:2580 [replicaGC,s2,r1/45:/M{in-ax}] removing replica r1/45
[08:42:54][Step 2/2] I181018 08:38:39.245518 67813 storage/replica.go:863 [replicaGC,s2,r1/45:/M{in-ax}] removed 48 (43+5) keys in 6ms [clear=5ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:39.246217 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 343, log entries: 46, rate-limit: 2.0 MiB/sec, 11ms
[08:42:54][Step 2/2] I181018 08:38:39.259018 67713 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 224 (id=e2195e1f, encoded size=41113, 1 rocksdb batches, 46 log entries)
[08:42:54][Step 2/2] I181018 08:38:39.280218 67713 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 21ms [clear=4ms batch=0ms entries=15ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:39.283894 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):46): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=46, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.308225 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=e147e05c] proposing ADD_REPLICA((n2,s2):46): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):46] next=47
[08:42:54][Step 2/2] I181018 08:38:39.320326 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):46): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):46, next=47, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.333894 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=5d67287d] proposing REMOVE_REPLICA((n2,s2):46): updated=[(n1,s1):1 (n3,s3):3] next=47
[08:42:54][Step 2/2] I181018 08:38:39.347990 67768 storage/store.go:2580 [replicaGC,s2,r1/46:/M{in-ax}] removing replica r1/46
[08:42:54][Step 2/2] I181018 08:38:39.349323 67209 storage/store.go:3640 [s2,r1/46:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:39.351014 67209 storage/store.go:3640 [s2,r1/46:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:39.349511 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot ab4becc1 at applied index 230
[08:42:54][Step 2/2] I181018 08:38:39.351989 67768 storage/replica.go:863 [replicaGC,s2,r1/46:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:39.356837 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 351, log entries: 52, rate-limit: 2.0 MiB/sec, 12ms
[08:42:54][Step 2/2] I181018 08:38:39.362647 67803 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 230 (id=ab4becc1, encoded size=43416, 1 rocksdb batches, 52 log entries)
[08:42:54][Step 2/2] I181018 08:38:39.385772 67803 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 23ms [clear=4ms batch=0ms entries=17ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:39.389455 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:39.393085 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):47): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=47, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.409693 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=6011e331] proposing ADD_REPLICA((n2,s2):47): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):47] next=48
[08:42:54][Step 2/2] I181018 08:38:39.420273 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):47): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):47, next=48, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.465459 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=0ccdd5a2] proposing REMOVE_REPLICA((n2,s2):47): updated=[(n1,s1):1 (n3,s3):3] next=48
[08:42:54][Step 2/2] I181018 08:38:39.488570 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot b2568267 at applied index 235
[08:42:54][Step 2/2] I181018 08:38:39.491712 67209 storage/store.go:3640 [s2,r1/47:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:39.495913 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 358, log entries: 57, rate-limit: 2.0 MiB/sec, 19ms
[08:42:54][Step 2/2] I181018 08:38:39.496484 67810 storage/store.go:2580 [replicaGC,s2,r1/47:/M{in-ax}] removing replica r1/47
[08:42:54][Step 2/2] I181018 08:38:39.501204 67810 storage/replica.go:863 [replicaGC,s2,r1/47:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=3ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:39.508493 67809 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 235 (id=b2568267, encoded size=45637, 1 rocksdb batches, 57 log entries)
[08:42:54][Step 2/2] I181018 08:38:39.530008 67209 storage/store.go:3618 [s2,r1/?:{-}] replica too old response with old replica ID: 47
[08:42:54][Step 2/2] I181018 08:38:39.535943 67809 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 27ms [clear=4ms batch=0ms entries=19ms commit=3ms]
[08:42:54][Step 2/2] I181018 08:38:39.540861 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):48): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=48, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.552201 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=081707af] proposing ADD_REPLICA((n2,s2):48): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):48] next=49
[08:42:54][Step 2/2] I181018 08:38:39.565512 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):48): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):48, next=49, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.578774 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=7fb6c5f4] proposing REMOVE_REPLICA((n2,s2):48): updated=[(n1,s1):1 (n3,s3):3] next=49
[08:42:54][Step 2/2] I181018 08:38:39.596176 67209 storage/store.go:3640 [s2,r1/48:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:39.596647 67209 storage/store.go:3640 [s2,r1/48:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:39.599874 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 78da3a8e at applied index 239
[08:42:54][Step 2/2] I181018 08:38:39.600871 67817 storage/store.go:2580 [replicaGC,s2,r1/48:/M{in-ax}] removing replica r1/48
[08:42:54][Step 2/2] I181018 08:38:39.607039 67817 storage/replica.go:863 [replicaGC,s2,r1/48:/M{in-ax}] removed 48 (43+5) keys in 5ms [clear=5ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:39.607727 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 364, log entries: 61, rate-limit: 2.0 MiB/sec, 16ms
[08:42:54][Step 2/2] I181018 08:38:39.613884 67821 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 239 (id=78da3a8e, encoded size=47596, 1 rocksdb batches, 61 log entries)
[08:42:54][Step 2/2] I181018 08:38:39.641815 67821 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 28ms [clear=3ms batch=0ms entries=22ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:39.646064 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):49): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=49, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.657058 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=1e9badb7] proposing ADD_REPLICA((n2,s2):49): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):49] next=50
[08:42:54][Step 2/2] I181018 08:38:39.669528 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):49): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):49, next=50, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.683899 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=ba82d883] proposing REMOVE_REPLICA((n2,s2):49): updated=[(n1,s1):1 (n3,s3):3] next=50
[08:42:54][Step 2/2] I181018 08:38:39.701544 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot a12b7ca9 at applied index 243
[08:42:54][Step 2/2] I181018 08:38:39.702505 67209 storage/store.go:3640 [s2,r1/49:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:39.702981 67209 storage/store.go:3640 [s2,r1/49:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:39.708692 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 370, log entries: 65, rate-limit: 2.0 MiB/sec, 11ms
[08:42:54][Step 2/2] I181018 08:38:39.716919 67838 storage/store.go:2580 [replicaGC,s2,r1/49:/M{in-ax}] removing replica r1/49
[08:42:54][Step 2/2] I181018 08:38:39.720445 67838 storage/replica.go:863 [replicaGC,s2,r1/49:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:39.731157 67845 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 243 (id=a12b7ca9, encoded size=49555, 1 rocksdb batches, 65 log entries)
[08:42:54][Step 2/2] I181018 08:38:39.764212 67845 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 33ms [clear=5ms batch=2ms entries=23ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:39.768914 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):50): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=50, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.783160 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=e2d354be] proposing ADD_REPLICA((n2,s2):50): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):50] next=51
[08:42:54][Step 2/2] I181018 08:38:39.793003 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):50): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):50, next=51, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.808711 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=1c23e801] proposing REMOVE_REPLICA((n2,s2):50): updated=[(n1,s1):1 (n3,s3):3] next=51
[08:42:54][Step 2/2] I181018 08:38:39.827428 67209 storage/store.go:3640 [s2,r1/50:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:39.827790 67209 storage/store.go:3640 [s2,r1/50:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:39.830277 67863 storage/store.go:2580 [replicaGC,s2,r1/50:/M{in-ax}] removing replica r1/50
[08:42:54][Step 2/2] I181018 08:38:39.833966 67863 storage/replica.go:863 [replicaGC,s2,r1/50:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:39.834539 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot a90c9287 at applied index 249
[08:42:54][Step 2/2] I181018 08:38:39.840267 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 378, log entries: 71, rate-limit: 2.0 MiB/sec, 11ms
[08:42:54][Step 2/2] I181018 08:38:39.848041 67892 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 249 (id=a90c9287, encoded size=51858, 1 rocksdb batches, 71 log entries)
[08:42:54][Step 2/2] I181018 08:38:39.877737 67892 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 29ms [clear=4ms batch=0ms entries=22ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:39.882527 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):51): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=51, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.900842 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=77368f62] proposing ADD_REPLICA((n2,s2):51): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):51] next=52
[08:42:54][Step 2/2] I181018 08:38:39.914198 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):51): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):51, next=52, gen=0]
[08:42:54][Step 2/2] I181018 08:38:39.932780 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=6c095bca] proposing REMOVE_REPLICA((n2,s2):51): updated=[(n1,s1):1 (n3,s3):3] next=52
[08:42:54][Step 2/2] I181018 08:38:39.941235 67209 storage/store.go:3640 [s2,r1/51:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:39.942199 67209 storage/store.go:3640 [s2,r1/51:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:39.946681 67785 storage/store.go:2580 [replicaGC,s2,r1/51:/M{in-ax}] removing replica r1/51
[08:42:54][Step 2/2] I181018 08:38:39.948565 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 143e924e at applied index 254
[08:42:54][Step 2/2] I181018 08:38:39.949374 67785 storage/replica.go:863 [replicaGC,s2,r1/51:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:39.953087 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 385, log entries: 76, rate-limit: 2.0 MiB/sec, 7ms
[08:42:54][Step 2/2] I181018 08:38:39.958632 67869 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 254 (id=143e924e, encoded size=53989, 1 rocksdb batches, 76 log entries)
[08:42:54][Step 2/2] I181018 08:38:39.996663 67869 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 38ms [clear=4ms batch=0ms entries=30ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:39.999105 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:40.002514 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):52): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=52, gen=0]
[08:42:54][Step 2/2] I181018 08:38:40.030228 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=957a93c3] proposing ADD_REPLICA((n2,s2):52): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):52] next=53
[08:42:54][Step 2/2] I181018 08:38:40.047743 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):52): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):52, next=53, gen=0]
[08:42:54][Step 2/2] I181018 08:38:40.064030 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=30989e7d] proposing REMOVE_REPLICA((n2,s2):52): updated=[(n1,s1):1 (n3,s3):3] next=53
[08:42:54][Step 2/2] I181018 08:38:40.075337 67209 storage/store.go:3640 [s2,r1/52:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:40.075500 67898 storage/store.go:2580 [replicaGC,s2,r1/52:/M{in-ax}] removing replica r1/52
[08:42:54][Step 2/2] I181018 08:38:40.078921 67898 storage/replica.go:863 [replicaGC,s2,r1/52:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:40.081452 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot e13d7fb9 at applied index 258
[08:42:54][Step 2/2] I181018 08:38:40.087227 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 391, log entries: 80, rate-limit: 2.0 MiB/sec, 10ms
[08:42:54][Step 2/2] I181018 08:38:40.095459 67841 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 258 (id=e13d7fb9, encoded size=56038, 1 rocksdb batches, 80 log entries)
[08:42:54][Step 2/2] I181018 08:38:40.127321 67841 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 32ms [clear=4ms batch=0ms entries=25ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:40.133434 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):53): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=53, gen=0]
[08:42:54][Step 2/2] I181018 08:38:40.146660 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=9eb7f839] proposing ADD_REPLICA((n2,s2):53): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):53] next=54
[08:42:54][Step 2/2] I181018 08:38:40.164778 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):53): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):53, next=54, gen=0]
[08:42:54][Step 2/2] I181018 08:38:40.191105 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=380d2471] proposing REMOVE_REPLICA((n2,s2):53): updated=[(n1,s1):1 (n3,s3):3] next=54
[08:42:54][Step 2/2] I181018 08:38:40.213273 67209 storage/store.go:3640 [s2,r1/53:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:40.214989 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 584e4522 at applied index 263
[08:42:54][Step 2/2] I181018 08:38:40.219838 67886 storage/store.go:2580 [replicaGC,s2,r1/53:/M{in-ax}] removing replica r1/53
[08:42:54][Step 2/2] I181018 08:38:40.224102 67886 storage/replica.go:863 [replicaGC,s2,r1/53:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:40.224402 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 398, log entries: 85, rate-limit: 2.0 MiB/sec, 13ms
[08:42:54][Step 2/2] I181018 08:38:40.234269 67776 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 263 (id=584e4522, encoded size=58169, 1 rocksdb batches, 85 log entries)
[08:42:54][Step 2/2] I181018 08:38:40.268028 67776 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 34ms [clear=4ms batch=0ms entries=27ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:40.272719 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):54): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=54, gen=0]
[08:42:54][Step 2/2] I181018 08:38:40.285905 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b3b9409a] proposing ADD_REPLICA((n2,s2):54): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):54] next=55
[08:42:54][Step 2/2] I181018 08:38:40.303187 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):54): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):54, next=55, gen=0]
[08:42:54][Step 2/2] I181018 08:38:40.318007 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=0301a801] proposing REMOVE_REPLICA((n2,s2):54): updated=[(n1,s1):1 (n3,s3):3] next=55
[08:42:54][Step 2/2] I181018 08:38:40.339216 67903 storage/store.go:2580 [replicaGC,s2,r1/54:/M{in-ax}] removing replica r1/54
[08:42:54][Step 2/2] I181018 08:38:40.342324 67903 storage/replica.go:863 [replicaGC,s2,r1/54:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:40.347415 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 90d76fae at applied index 269
[08:42:54][Step 2/2] I181018 08:38:40.353298 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 405, log entries: 91, rate-limit: 2.0 MiB/sec, 18ms
[08:42:54][Step 2/2] I181018 08:38:40.360303 67912 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 269 (id=90d76fae, encoded size=60318, 1 rocksdb batches, 91 log entries)
[08:42:54][Step 2/2] I181018 08:38:40.390109 67912 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 30ms [clear=4ms batch=0ms entries=23ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:40.395643 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):55): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=55, gen=0]
[08:42:54][Step 2/2] I181018 08:38:40.413350 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=0c7b35b8] proposing ADD_REPLICA((n2,s2):55): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):55] next=56
[08:42:54][Step 2/2] I181018 08:38:40.435727 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):55): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):55, next=56, gen=0]
[08:42:54][Step 2/2] I181018 08:38:40.462704 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=78560058] proposing REMOVE_REPLICA((n2,s2):55): updated=[(n1,s1):1 (n3,s3):3] next=56
[08:42:54][Step 2/2] I181018 08:38:40.477791 67941 storage/store.go:2580 [replicaGC,s2,r1/55:/M{in-ax}] removing replica r1/55
[08:42:54][Step 2/2] I181018 08:38:40.481242 67941 storage/replica.go:863 [replicaGC,s2,r1/55:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:40.490481 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:40.490734 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 664b0765 at applied index 274
[08:42:54][Step 2/2] I181018 08:38:40.496542 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 412, log entries: 96, rate-limit: 2.0 MiB/sec, 17ms
[08:42:54][Step 2/2] I181018 08:38:40.505242 67924 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 274 (id=664b0765, encoded size=62449, 1 rocksdb batches, 96 log entries)
[08:42:54][Step 2/2] I181018 08:38:40.541076 67924 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 35ms [clear=4ms batch=0ms entries=29ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:40.544723 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):56): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=56, gen=0]
[08:42:54][Step 2/2] I181018 08:38:40.560113 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=6f88e91d] proposing ADD_REPLICA((n2,s2):56): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):56] next=57
[08:42:54][Step 2/2] I181018 08:38:40.567745 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):56): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):56, next=57, gen=0]
[08:42:54][Step 2/2] I181018 08:38:40.589147 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=aaa6a575] proposing REMOVE_REPLICA((n2,s2):56): updated=[(n1,s1):1 (n3,s3):3] next=57
[08:42:54][Step 2/2] I181018 08:38:40.604375 67209 storage/store.go:3640 [s2,r1/56:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:40.605974 67955 storage/store.go:2580 [replicaGC,s2,r1/56:/M{in-ax}] removing replica r1/56
[08:42:54][Step 2/2] I181018 08:38:40.610190 67955 storage/replica.go:863 [replicaGC,s2,r1/56:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:40.617469 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 234162b5 at applied index 278
[08:42:54][Step 2/2] I181018 08:38:40.626942 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 418, log entries: 100, rate-limit: 2.0 MiB/sec, 23ms
[08:42:54][Step 2/2] I181018 08:38:40.638302 67906 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 278 (id=234162b5, encoded size=64408, 1 rocksdb batches, 100 log entries)
[08:42:54][Step 2/2] I181018 08:38:40.679946 67906 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 41ms [clear=4ms batch=0ms entries=34ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:40.685268 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):57): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=57, gen=0]
[08:42:54][Step 2/2] I181018 08:38:40.717739 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=9f03f161] proposing ADD_REPLICA((n2,s2):57): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):57] next=58
[08:42:54][Step 2/2] I181018 08:38:40.736673 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):57): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):57, next=58, gen=0]
[08:42:54][Step 2/2] I181018 08:38:40.757442 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=247ab23a] proposing REMOVE_REPLICA((n2,s2):57): updated=[(n1,s1):1 (n3,s3):3] next=58
[08:42:54][Step 2/2] I181018 08:38:40.776692 67209 storage/store.go:3640 [s2,r1/57:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:40.779084 67943 storage/store.go:2580 [replicaGC,s2,r1/57:/M{in-ax}] removing replica r1/57
[08:42:54][Step 2/2] I181018 08:38:40.780947 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 4c64d3c5 at applied index 284
[08:42:54][Step 2/2] I181018 08:38:40.787119 67209 storage/store.go:3640 [s2,r1/57:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:40.787461 67943 storage/replica.go:863 [replicaGC,s2,r1/57:/M{in-ax}] removed 48 (43+5) keys in 5ms [clear=5ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:40.797127 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 426, log entries: 106, rate-limit: 2.0 MiB/sec, 19ms
[08:42:54][Step 2/2] I181018 08:38:40.806695 67858 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 284 (id=4c64d3c5, encoded size=66711, 1 rocksdb batches, 106 log entries)
[08:42:54][Step 2/2] I181018 08:38:40.843566 67858 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 37ms [clear=3ms batch=0ms entries=30ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:40.849460 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):58): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=58, gen=0]
[08:42:54][Step 2/2] I181018 08:38:40.866216 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b3a1475b] proposing ADD_REPLICA((n2,s2):58): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):58] next=59
[08:42:54][Step 2/2] I181018 08:38:40.876431 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):58): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):58, next=59, gen=0]
[08:42:54][Step 2/2] I181018 08:38:40.906721 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=f67cea78] proposing REMOVE_REPLICA((n2,s2):58): updated=[(n1,s1):1 (n3,s3):3] next=59
[08:42:54][Step 2/2] I181018 08:38:40.922969 67929 storage/store.go:2580 [replicaGC,s2,r1/58:/M{in-ax}] removing replica r1/58
[08:42:54][Step 2/2] I181018 08:38:40.923813 67209 storage/store.go:3640 [s2,r1/58:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:40.926103 67929 storage/replica.go:863 [replicaGC,s2,r1/58:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:40.927439 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 499677cf at applied index 289
[08:42:54][Step 2/2] I181018 08:38:40.934011 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 433, log entries: 111, rate-limit: 2.0 MiB/sec, 9ms
[08:42:54][Step 2/2] I181018 08:38:40.940288 67948 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 289 (id=499677cf, encoded size=68842, 1 rocksdb batches, 111 log entries)
[08:42:54][Step 2/2] I181018 08:38:40.983490 67948 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 43ms [clear=4ms batch=0ms entries=36ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:40.989221 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):59): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=59, gen=0]
[08:42:54][Step 2/2] I181018 08:38:41.003938 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=4f2df061] proposing ADD_REPLICA((n2,s2):59): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):59] next=60
[08:42:54][Step 2/2] I181018 08:38:41.017379 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):59): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):59, next=60, gen=0]
[08:42:54][Step 2/2] I181018 08:38:41.042283 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=1650736a] proposing REMOVE_REPLICA((n2,s2):59): updated=[(n1,s1):1 (n3,s3):3] next=60
[08:42:54][Step 2/2] I181018 08:38:41.061222 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 5e9813d2 at applied index 293
[08:42:54][Step 2/2] I181018 08:38:41.067969 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 439, log entries: 115, rate-limit: 2.0 MiB/sec, 12ms
[08:42:54][Step 2/2] I181018 08:38:41.069685 68004 storage/store.go:2580 [replicaGC,s2,r1/59:/M{in-ax}] removing replica r1/59
[08:42:54][Step 2/2] I181018 08:38:41.071237 67209 storage/store.go:3640 [s2,r1/59:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:41.073395 68004 storage/replica.go:863 [replicaGC,s2,r1/59:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:41.081535 67951 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 293 (id=5e9813d2, encoded size=70801, 1 rocksdb batches, 115 log entries)
[08:42:54][Step 2/2] I181018 08:38:41.123619 67951 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 42ms [clear=3ms batch=0ms entries=36ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:41.128341 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:41.130786 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):60): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=60, gen=0]
[08:42:54][Step 2/2] I181018 08:38:41.151645 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=13411c79] proposing ADD_REPLICA((n2,s2):60): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):60] next=61
[08:42:54][Step 2/2] I181018 08:38:41.164239 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):60): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):60, next=61, gen=0]
[08:42:54][Step 2/2] I181018 08:38:41.177163 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=da71c1d2] proposing REMOVE_REPLICA((n2,s2):60): updated=[(n1,s1):1 (n3,s3):3] next=61
[08:42:54][Step 2/2] I181018 08:38:41.188095 67209 storage/store.go:3640 [s2,r1/60:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:41.188670 67209 storage/store.go:3640 [s2,r1/60:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:41.189716 67962 storage/store.go:2580 [replicaGC,s2,r1/60:/M{in-ax}] removing replica r1/60
[08:42:54][Step 2/2] I181018 08:38:41.192206 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 5c2b7318 at applied index 299
[08:42:54][Step 2/2] I181018 08:38:41.192930 67962 storage/replica.go:863 [replicaGC,s2,r1/60:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:41.197720 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 447, log entries: 121, rate-limit: 2.0 MiB/sec, 9ms
[08:42:54][Step 2/2] I181018 08:38:41.207952 67965 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 299 (id=5c2b7318, encoded size=73194, 1 rocksdb batches, 121 log entries)
[08:42:54][Step 2/2] I181018 08:38:41.266656 67965 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 58ms [clear=5ms batch=1ms entries=50ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:41.272186 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):61): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=61, gen=0]
[08:42:54][Step 2/2] I181018 08:38:41.284911 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=ab946f2f] proposing ADD_REPLICA((n2,s2):61): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):61] next=62
[08:42:54][Step 2/2] I181018 08:38:41.297096 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):61): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):61, next=62, gen=0]
[08:42:54][Step 2/2] I181018 08:38:41.311433 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=54406e75] proposing REMOVE_REPLICA((n2,s2):61): updated=[(n1,s1):1 (n3,s3):3] next=62
[08:42:54][Step 2/2] I181018 08:38:41.324300 68007 storage/store.go:2580 [replicaGC,s2,r1/61:/M{in-ax}] removing replica r1/61
[08:42:54][Step 2/2] I181018 08:38:41.325098 67209 storage/store.go:3640 [s2,r1/61:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:41.325613 67209 storage/store.go:3640 [s2,r1/61:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:41.329447 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot acec36db at applied index 304
[08:42:54][Step 2/2] I181018 08:38:41.340497 68007 storage/replica.go:863 [replicaGC,s2,r1/61:/M{in-ax}] removed 48 (43+5) keys in 15ms [clear=15ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:41.342623 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 454, log entries: 126, rate-limit: 2.0 MiB/sec, 16ms
[08:42:54][Step 2/2] I181018 08:38:41.352826 67979 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 304 (id=acec36db, encoded size=75325, 1 rocksdb batches, 126 log entries)
[08:42:54][Step 2/2] I181018 08:38:41.407720 67979 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 55ms [clear=6ms batch=0ms entries=46ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:41.412522 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):62): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=62, gen=0]
[08:42:54][Step 2/2] I181018 08:38:41.427853 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=c2d1acc6] proposing ADD_REPLICA((n2,s2):62): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):62] next=63
[08:42:54][Step 2/2] I181018 08:38:41.440984 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):62): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):62, next=63, gen=0]
[08:42:54][Step 2/2] I181018 08:38:41.456922 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=370cd9f2] proposing REMOVE_REPLICA((n2,s2):62): updated=[(n1,s1):1 (n3,s3):3] next=63
[08:42:54][Step 2/2] I181018 08:38:41.483656 67209 storage/store.go:3640 [s2,r1/62:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:41.489796 67209 storage/store.go:3640 [s2,r1/62:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:41.492750 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot d6b586f1 at applied index 308
[08:42:54][Step 2/2] I181018 08:38:41.493735 68036 storage/store.go:2580 [replicaGC,s2,r1/62:/M{in-ax}] removing replica r1/62
[08:42:54][Step 2/2] I181018 08:38:41.497275 68036 storage/replica.go:863 [replicaGC,s2,r1/62:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:41.504719 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 460, log entries: 130, rate-limit: 2.0 MiB/sec, 17ms
[08:42:54][Step 2/2] I181018 08:38:41.514427 67920 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 308 (id=d6b586f1, encoded size=77284, 1 rocksdb batches, 130 log entries)
[08:42:54][Step 2/2] I181018 08:38:41.565260 67920 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 50ms [clear=5ms batch=0ms entries=42ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:41.569392 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):63): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=63, gen=0]
[08:42:54][Step 2/2] I181018 08:38:41.584360 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=5d79f46d] proposing ADD_REPLICA((n2,s2):63): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):63] next=64
[08:42:54][Step 2/2] I181018 08:38:41.597013 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):63): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):63, next=64, gen=0]
[08:42:54][Step 2/2] I181018 08:38:41.611699 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=01792749] proposing REMOVE_REPLICA((n2,s2):63): updated=[(n1,s1):1 (n3,s3):3] next=64
[08:42:54][Step 2/2] I181018 08:38:41.625967 67209 storage/store.go:3640 [s2,r1/63:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:41.627786 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 2ab27c96 at applied index 314
[08:42:54][Step 2/2] I181018 08:38:41.632529 67937 storage/store.go:2580 [replicaGC,s2,r1/63:/M{in-ax}] removing replica r1/63
[08:42:54][Step 2/2] I181018 08:38:41.636024 67937 storage/replica.go:863 [replicaGC,s2,r1/63:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:41.636466 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 468, log entries: 136, rate-limit: 2.0 MiB/sec, 11ms
[08:42:54][Step 2/2] I181018 08:38:41.652277 68067 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 314 (id=2ab27c96, encoded size=79587, 1 rocksdb batches, 136 log entries)
[08:42:54][Step 2/2] I181018 08:38:41.720479 68067 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 68ms [clear=5ms batch=0ms entries=57ms commit=5ms]
[08:42:54][Step 2/2] I181018 08:38:41.725702 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):64): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=64, gen=0]
[08:42:54][Step 2/2] I181018 08:38:41.738803 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=e56428cb] proposing ADD_REPLICA((n2,s2):64): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):64] next=65
[08:42:54][Step 2/2] I181018 08:38:41.749500 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):64): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):64, next=65, gen=0]
[08:42:54][Step 2/2] I181018 08:38:41.765352 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=a4ecca04] proposing REMOVE_REPLICA((n2,s2):64): updated=[(n1,s1):1 (n3,s3):3] next=65
[08:42:54][Step 2/2] I181018 08:38:41.777056 67209 storage/store.go:3640 [s2,r1/64:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:41.777672 67209 storage/store.go:3640 [s2,r1/64:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:41.780810 68068 storage/store.go:2580 [replicaGC,s2,r1/64:/M{in-ax}] removing replica r1/64
[08:42:54][Step 2/2] I181018 08:38:41.780950 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 912eca07 at applied index 319
[08:42:54][Step 2/2] I181018 08:38:41.789298 68068 storage/replica.go:863 [replicaGC,s2,r1/64:/M{in-ax}] removed 48 (43+5) keys in 7ms [clear=7ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:41.791606 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 475, log entries: 141, rate-limit: 2.0 MiB/sec, 13ms
[08:42:54][Step 2/2] I181018 08:38:41.801740 67998 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 319 (id=912eca07, encoded size=81718, 1 rocksdb batches, 141 log entries)
[08:42:54][Step 2/2] I181018 08:38:41.850672 67998 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 49ms [clear=6ms batch=0ms entries=40ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:41.854846 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):65): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=65, gen=0]
[08:42:54][Step 2/2] I181018 08:38:41.866724 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=3a4d9eed] proposing ADD_REPLICA((n2,s2):65): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):65] next=66
[08:42:54][Step 2/2] I181018 08:38:41.878893 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):65): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):65, next=66, gen=0]
[08:42:54][Step 2/2] I181018 08:38:41.896168 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=33aaf8c9] proposing REMOVE_REPLICA((n2,s2):65): updated=[(n1,s1):1 (n3,s3):3] next=66
[08:42:54][Step 2/2] I181018 08:38:41.906786 67209 storage/store.go:3640 [s2,r1/65:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:41.910421 67209 storage/store.go:3640 [s2,r1/65:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:41.912508 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 977353da at applied index 323
[08:42:54][Step 2/2] I181018 08:38:41.919179 68053 storage/store.go:2580 [replicaGC,s2,r1/65:/M{in-ax}] removing replica r1/65
[08:42:54][Step 2/2] I181018 08:38:41.919246 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 481, log entries: 145, rate-limit: 2.0 MiB/sec, 10ms
[08:42:54][Step 2/2] I181018 08:38:41.922713 68053 storage/replica.go:863 [replicaGC,s2,r1/65:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:41.933909 68073 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 323 (id=977353da, encoded size=83677, 1 rocksdb batches, 145 log entries)
[08:42:54][Step 2/2] I181018 08:38:42.002849 68073 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 69ms [clear=6ms batch=0ms entries=59ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:42.006761 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):66): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=66, gen=0]
[08:42:54][Step 2/2] I181018 08:38:42.021287 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=791d4d3f] proposing ADD_REPLICA((n2,s2):66): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):66] next=67
[08:42:54][Step 2/2] I181018 08:38:42.033506 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):66): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):66, next=67, gen=0]
[08:42:54][Step 2/2] I181018 08:38:42.045239 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=7adfa04f] proposing REMOVE_REPLICA((n2,s2):66): updated=[(n1,s1):1 (n3,s3):3] next=67
[08:42:54][Step 2/2] I181018 08:38:42.055411 67209 storage/store.go:3640 [s2,r1/66:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:42.059166 68002 storage/store.go:2580 [replicaGC,s2,r1/66:/M{in-ax}] removing replica r1/66
[08:42:54][Step 2/2] I181018 08:38:42.060327 67209 storage/store.go:3640 [s2,r1/66:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:42.061103 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 9439858a at applied index 329
[08:42:54][Step 2/2] I181018 08:38:42.069128 68002 storage/replica.go:863 [replicaGC,s2,r1/66:/M{in-ax}] removed 48 (43+5) keys in 7ms [clear=7ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:42.072504 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 489, log entries: 151, rate-limit: 2.0 MiB/sec, 14ms
[08:42:54][Step 2/2] I181018 08:38:42.083043 67984 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 329 (id=9439858a, encoded size=85977, 1 rocksdb batches, 151 log entries)
[08:42:54][Step 2/2] I181018 08:38:42.163281 67984 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 80ms [clear=9ms batch=0ms entries=67ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:42.165195 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:42.168296 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):67): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=67, gen=0]
[08:42:54][Step 2/2] I181018 08:38:42.184131 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=f8cd6ff8] proposing ADD_REPLICA((n2,s2):67): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):67] next=68
[08:42:54][Step 2/2] I181018 08:38:42.195559 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):67): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):67, next=68, gen=0]
[08:42:54][Step 2/2] I181018 08:38:42.212925 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=c9aeedcd] proposing REMOVE_REPLICA((n2,s2):67): updated=[(n1,s1):1 (n3,s3):3] next=68
[08:42:54][Step 2/2] I181018 08:38:42.237382 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot abd96139 at applied index 334
[08:42:54][Step 2/2] I181018 08:38:42.237464 68033 storage/store.go:2580 [replicaGC,s2,r1/67:/M{in-ax}] removing replica r1/67
[08:42:54][Step 2/2] I181018 08:38:42.244903 67209 storage/store.go:3640 [s2,r1/67:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:42.247030 68033 storage/replica.go:863 [replicaGC,s2,r1/67:/M{in-ax}] removed 48 (43+5) keys in 8ms [clear=7ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:42.249408 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 496, log entries: 156, rate-limit: 2.0 MiB/sec, 15ms
[08:42:54][Step 2/2] I181018 08:38:42.259023 68047 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 334 (id=abd96139, encoded size=88198, 1 rocksdb batches, 156 log entries)
[08:42:54][Step 2/2] I181018 08:38:42.318218 68047 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 59ms [clear=8ms batch=0ms entries=48ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:42.323955 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):68): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=68, gen=0]
[08:42:54][Step 2/2] I181018 08:38:42.336333 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=eae271bd] proposing ADD_REPLICA((n2,s2):68): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):68] next=69
[08:42:54][Step 2/2] I181018 08:38:42.350271 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):68): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):68, next=69, gen=0]
[08:42:54][Step 2/2] I181018 08:38:42.362571 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=6cb9bf19] proposing REMOVE_REPLICA((n2,s2):68): updated=[(n1,s1):1 (n3,s3):3] next=69
[08:42:54][Step 2/2] I181018 08:38:42.374956 68048 storage/store.go:2580 [replicaGC,s2,r1/68:/M{in-ax}] removing replica r1/68
[08:42:54][Step 2/2] I181018 08:38:42.377354 67209 storage/store.go:3640 [s2,r1/68:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:42.377753 67209 storage/store.go:3640 [s2,r1/68:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:42.378886 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot bae9d0b9 at applied index 338
[08:42:54][Step 2/2] I181018 08:38:42.380035 68048 storage/replica.go:863 [replicaGC,s2,r1/68:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:42.390987 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 502, log entries: 160, rate-limit: 2.0 MiB/sec, 18ms
[08:42:54][Step 2/2] I181018 08:38:42.401570 68100 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 338 (id=bae9d0b9, encoded size=90157, 1 rocksdb batches, 160 log entries)
[08:42:54][Step 2/2] I181018 08:38:42.467008 68100 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 65ms [clear=7ms batch=0ms entries=55ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:42.471089 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):69): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=69, gen=0]
[08:42:54][Step 2/2] I181018 08:38:42.486022 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=3d6845aa] proposing ADD_REPLICA((n2,s2):69): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):69] next=70
[08:42:54][Step 2/2] I181018 08:38:42.523619 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):69): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):69, next=70, gen=0]
[08:42:54][Step 2/2] I181018 08:38:42.539416 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=3bcd2465] proposing REMOVE_REPLICA((n2,s2):69): updated=[(n1,s1):1 (n3,s3):3] next=70
[08:42:54][Step 2/2] I181018 08:38:42.554225 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 5e69c5d1 at applied index 344
[08:42:54][Step 2/2] I181018 08:38:42.559193 67209 storage/store.go:3640 [s2,r1/69:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:42.559567 67209 storage/store.go:3640 [s2,r1/69:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:42.560724 68050 storage/store.go:2580 [replicaGC,s2,r1/69:/M{in-ax}] removing replica r1/69
[08:42:54][Step 2/2] I181018 08:38:42.562105 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 510, log entries: 166, rate-limit: 2.0 MiB/sec, 11ms
[08:42:54][Step 2/2] I181018 08:38:42.563773 68050 storage/replica.go:863 [replicaGC,s2,r1/69:/M{in-ax}] removed 48 (43+5) keys in 2ms [clear=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:42.573109 68015 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 344 (id=5e69c5d1, encoded size=92460, 1 rocksdb batches, 166 log entries)
[08:42:54][Step 2/2] I181018 08:38:42.642967 68015 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 70ms [clear=6ms batch=0ms entries=59ms commit=3ms]
[08:42:54][Step 2/2] I181018 08:38:42.644873 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:42.648652 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):70): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=70, gen=0]
[08:42:54][Step 2/2] I181018 08:38:42.675121 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=1caa1c24] proposing ADD_REPLICA((n2,s2):70): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):70] next=71
[08:42:54][Step 2/2] I181018 08:38:42.688447 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):70): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):70, next=71, gen=0]
[08:42:54][Step 2/2] I181018 08:38:42.709900 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=837bc550] proposing REMOVE_REPLICA((n2,s2):70): updated=[(n1,s1):1 (n3,s3):3] next=71
[08:42:54][Step 2/2] I181018 08:38:42.717991 67209 storage/store.go:3640 [s2,r1/70:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:42.726047 68097 storage/store.go:2580 [replicaGC,s2,r1/70:/M{in-ax}] removing replica r1/70
[08:42:54][Step 2/2] I181018 08:38:42.734115 67209 storage/store.go:3640 [s2,r1/70:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:42.735510 68097 storage/replica.go:863 [replicaGC,s2,r1/70:/M{in-ax}] removed 48 (43+5) keys in 4ms [clear=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:42.743161 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot b9e35d53 at applied index 350
[08:42:54][Step 2/2] I181018 08:38:42.748643 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 517, log entries: 4, rate-limit: 2.0 MiB/sec, 21ms
[08:42:54][Step 2/2] I181018 08:38:42.757432 68163 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 350 (id=b9e35d53, encoded size=35457, 1 rocksdb batches, 4 log entries)
[08:42:54][Step 2/2] I181018 08:38:42.769146 68163 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 11ms [clear=7ms batch=0ms entries=1ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:42.773504 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):71): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=71, gen=0]
[08:42:54][Step 2/2] I181018 08:38:42.785260 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=7166e1bb] proposing ADD_REPLICA((n2,s2):71): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):71] next=72
[08:42:54][Step 2/2] I181018 08:38:42.795356 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):71): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):71, next=72, gen=0]
[08:42:54][Step 2/2] I181018 08:38:42.807142 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=ed80a756] proposing REMOVE_REPLICA((n2,s2):71): updated=[(n1,s1):1 (n3,s3):3] next=72
[08:42:54][Step 2/2] I181018 08:38:42.828897 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 49d0ad8b at applied index 354
[08:42:54][Step 2/2] I181018 08:38:42.829861 67209 storage/store.go:3640 [s2,r1/71:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:42.831721 67209 storage/store.go:3640 [s2,r1/71:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:42.833740 68164 storage/store.go:2580 [replicaGC,s2,r1/71:/M{in-ax}] removing replica r1/71
[08:42:54][Step 2/2] I181018 08:38:42.836383 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 523, log entries: 8, rate-limit: 2.0 MiB/sec, 21ms
[08:42:54][Step 2/2] I181018 08:38:42.838804 68164 storage/replica.go:863 [replicaGC,s2,r1/71:/M{in-ax}] removed 48 (43+5) keys in 4ms [clear=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:42.849145 68081 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 354 (id=49d0ad8b, encoded size=37416, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:42.861670 68081 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 12ms [clear=7ms batch=0ms entries=3ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:42.865866 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):72): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=72, gen=0]
[08:42:54][Step 2/2] I181018 08:38:42.878365 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=e6acb36a] proposing ADD_REPLICA((n2,s2):72): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):72] next=73
[08:42:54][Step 2/2] I181018 08:38:42.889152 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):72): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):72, next=73, gen=0]
[08:42:54][Step 2/2] I181018 08:38:42.903916 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=a4f0d435] proposing REMOVE_REPLICA((n2,s2):72): updated=[(n1,s1):1 (n3,s3):3] next=73
[08:42:54][Step 2/2] I181018 08:38:42.921919 67209 storage/store.go:3640 [s2,r1/72:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:42.922551 67209 storage/store.go:3640 [s2,r1/72:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:42.924006 68135 storage/store.go:2580 [replicaGC,s2,r1/72:/M{in-ax}] removing replica r1/72
[08:42:54][Step 2/2] I181018 08:38:42.926458 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 07653f20 at applied index 359
[08:42:54][Step 2/2] I181018 08:38:42.932389 68135 storage/replica.go:863 [replicaGC,s2,r1/72:/M{in-ax}] removed 48 (43+5) keys in 6ms [clear=5ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:42.936652 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 530, log entries: 13, rate-limit: 2.0 MiB/sec, 15ms
[08:42:54][Step 2/2] I181018 08:38:42.946973 68199 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 359 (id=07653f20, encoded size=39547, 1 rocksdb batches, 13 log entries)
[08:42:54][Step 2/2] I181018 08:38:42.961628 68199 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 14ms [clear=7ms batch=0ms entries=5ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:42.967048 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):73): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=73, gen=0]
[08:42:54][Step 2/2] I181018 08:38:42.983812 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=636edd41] proposing ADD_REPLICA((n2,s2):73): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):73] next=74
[08:42:54][Step 2/2] I181018 08:38:42.996413 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):73): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):73, next=74, gen=0]
[08:42:54][Step 2/2] I181018 08:38:43.010402 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=0a6bc850] proposing REMOVE_REPLICA((n2,s2):73): updated=[(n1,s1):1 (n3,s3):3] next=74
[08:42:54][Step 2/2] I181018 08:38:43.031395 67209 storage/store.go:3640 [s2,r1/73:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:43.035948 68184 storage/store.go:2580 [replicaGC,s2,r1/73:/M{in-ax}] removing replica r1/73
[08:42:54][Step 2/2] I181018 08:38:43.039109 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 7cb2d352 at applied index 364
[08:42:54][Step 2/2] I181018 08:38:43.042771 68184 storage/replica.go:863 [replicaGC,s2,r1/73:/M{in-ax}] removed 48 (43+5) keys in 5ms [clear=5ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:43.047104 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 537, log entries: 18, rate-limit: 2.0 MiB/sec, 11ms
[08:42:54][Step 2/2] I181018 08:38:43.058486 68212 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 364 (id=7cb2d352, encoded size=41678, 1 rocksdb batches, 18 log entries)
[08:42:54][Step 2/2] I181018 08:38:43.075480 68212 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 17ms [clear=7ms batch=0ms entries=6ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:43.080722 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):74): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=74, gen=0]
[08:42:54][Step 2/2] I181018 08:38:43.093989 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=0c06a97e] proposing ADD_REPLICA((n2,s2):74): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):74] next=75
[08:42:54][Step 2/2] I181018 08:38:43.107841 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):74): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):74, next=75, gen=0]
[08:42:54][Step 2/2] I181018 08:38:43.136112 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=3304d22f] proposing REMOVE_REPLICA((n2,s2):74): updated=[(n1,s1):1 (n3,s3):3] next=75
[08:42:54][Step 2/2] I181018 08:38:43.150071 67209 storage/store.go:3640 [s2,r1/74:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:43.152217 68137 storage/store.go:2580 [replicaGC,s2,r1/74:/M{in-ax}] removing replica r1/74
[08:42:54][Step 2/2] I181018 08:38:43.154140 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 535417e1 at applied index 369
[08:42:54][Step 2/2] I181018 08:38:43.160166 68137 storage/replica.go:863 [replicaGC,s2,r1/74:/M{in-ax}] removed 48 (43+5) keys in 7ms [clear=6ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:43.162752 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 544, log entries: 23, rate-limit: 2.0 MiB/sec, 11ms
[08:42:54][Step 2/2] I181018 08:38:43.172907 68139 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 369 (id=535417e1, encoded size=43809, 1 rocksdb batches, 23 log entries)
[08:42:54][Step 2/2] I181018 08:38:43.188107 68139 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 15ms [clear=5ms batch=0ms entries=7ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:43.189981 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:43.193503 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):75): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=75, gen=0]
[08:42:54][Step 2/2] I181018 08:38:43.213310 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=ae534df2] proposing ADD_REPLICA((n2,s2):75): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):75] next=76
[08:42:54][Step 2/2] I181018 08:38:43.226624 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):75): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):75, next=76, gen=0]
[08:42:54][Step 2/2] I181018 08:38:43.243840 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=cdfe651a] proposing REMOVE_REPLICA((n2,s2):75): updated=[(n1,s1):1 (n3,s3):3] next=76
[08:42:54][Step 2/2] I181018 08:38:43.253273 67209 storage/store.go:3640 [s2,r1/75:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:43.257690 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 83dfd988 at applied index 373
[08:42:54][Step 2/2] I181018 08:38:43.257903 68126 storage/store.go:2580 [replicaGC,s2,r1/75:/M{in-ax}] removing replica r1/75
[08:42:54][Step 2/2] I181018 08:38:43.259251 67209 storage/store.go:3640 [s2,r1/75:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:43.263852 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 550, log entries: 27, rate-limit: 2.0 MiB/sec, 9ms
[08:42:54][Step 2/2] I181018 08:38:43.275090 68126 storage/replica.go:863 [replicaGC,s2,r1/75:/M{in-ax}] removed 48 (43+5) keys in 16ms [clear=16ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:43.284706 68205 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 373 (id=83dfd988, encoded size=45858, 1 rocksdb batches, 27 log entries)
[08:42:54][Step 2/2] I181018 08:38:43.300108 68205 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 15ms [clear=5ms batch=0ms entries=7ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:43.301811 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:43.304209 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):76): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=76, gen=0]
[08:42:54][Step 2/2] I181018 08:38:43.334096 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=cda013ab] proposing ADD_REPLICA((n2,s2):76): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):76] next=77
[08:42:54][Step 2/2] I181018 08:38:43.351376 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):76): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):76, next=77, gen=0]
[08:42:54][Step 2/2] I181018 08:38:43.365770 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=2577ca69] proposing REMOVE_REPLICA((n2,s2):76): updated=[(n1,s1):1 (n3,s3):3] next=77
[08:42:54][Step 2/2] I181018 08:38:43.384429 67209 storage/store.go:3640 [s2,r1/76:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:43.386272 68129 storage/store.go:2580 [replicaGC,s2,r1/76:/M{in-ax}] removing replica r1/76
[08:42:54][Step 2/2] I181018 08:38:43.390398 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 6b342b81 at applied index 378
[08:42:54][Step 2/2] I181018 08:38:43.392323 68129 storage/replica.go:863 [replicaGC,s2,r1/76:/M{in-ax}] removed 48 (43+5) keys in 5ms [clear=5ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:43.399351 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 557, log entries: 32, rate-limit: 2.0 MiB/sec, 14ms
[08:42:54][Step 2/2] I181018 08:38:43.410362 68233 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 378 (id=6b342b81, encoded size=48079, 1 rocksdb batches, 32 log entries)
[08:42:54][Step 2/2] I181018 08:38:43.429318 68233 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 19ms [clear=7ms batch=0ms entries=9ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:43.434098 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):77): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=77, gen=0]
[08:42:54][Step 2/2] I181018 08:38:43.451689 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=5791fc53] proposing ADD_REPLICA((n2,s2):77): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):77] next=78
[08:42:54][Step 2/2] I181018 08:38:43.464433 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):77): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):77, next=78, gen=0]
[08:42:54][Step 2/2] I181018 08:38:43.494967 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=1c4ca968] proposing REMOVE_REPLICA((n2,s2):77): updated=[(n1,s1):1 (n3,s3):3] next=78
[08:42:54][Step 2/2] I181018 08:38:43.512443 67209 storage/store.go:3640 [s2,r1/77:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:43.513056 67209 storage/store.go:3640 [s2,r1/77:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:43.515166 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 35075cc8 at applied index 384
[08:42:54][Step 2/2] I181018 08:38:43.515466 68174 storage/store.go:2580 [replicaGC,s2,r1/77:/M{in-ax}] removing replica r1/77
[08:42:54][Step 2/2] I181018 08:38:43.536852 68174 storage/replica.go:863 [replicaGC,s2,r1/77:/M{in-ax}] removed 48 (43+5) keys in 20ms [clear=20ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:43.538839 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 565, log entries: 38, rate-limit: 2.0 MiB/sec, 29ms
[08:42:54][Step 2/2] I181018 08:38:43.549395 68192 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 384 (id=35075cc8, encoded size=50382, 1 rocksdb batches, 38 log entries)
[08:42:54][Step 2/2] I181018 08:38:43.570533 68192 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 21ms [clear=6ms batch=0ms entries=12ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:43.575825 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):78): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=78, gen=0]
[08:42:54][Step 2/2] I181018 08:38:43.588955 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=a9fb62be] proposing ADD_REPLICA((n2,s2):78): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):78] next=79
[08:42:54][Step 2/2] I181018 08:38:43.602217 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):78): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):78, next=79, gen=0]
[08:42:54][Step 2/2] I181018 08:38:43.635824 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=4c2f7b30] proposing REMOVE_REPLICA((n2,s2):78): updated=[(n1,s1):1 (n3,s3):3] next=79
[08:42:54][Step 2/2] I181018 08:38:43.658564 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 3c86e2db at applied index 388
[08:42:54][Step 2/2] I181018 08:38:43.660156 67209 storage/store.go:3640 [s2,r1/78:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:43.663206 68215 storage/store.go:2580 [replicaGC,s2,r1/78:/M{in-ax}] removing replica r1/78
[08:42:54][Step 2/2] I181018 08:38:43.664233 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 571, log entries: 42, rate-limit: 2.0 MiB/sec, 18ms
[08:42:54][Step 2/2] I181018 08:38:43.667628 68215 storage/replica.go:863 [replicaGC,s2,r1/78:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:43.678542 68248 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 388 (id=3c86e2db, encoded size=52341, 1 rocksdb batches, 42 log entries)
[08:42:54][Step 2/2] I181018 08:38:43.702737 68248 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 24ms [clear=7ms batch=0ms entries=14ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:43.704957 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:43.707673 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):79): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=79, gen=0]
[08:42:54][Step 2/2] I181018 08:38:43.725317 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=570b563b] proposing ADD_REPLICA((n2,s2):79): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):79] next=80
[08:42:54][Step 2/2] I181018 08:38:43.735657 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):79): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):79, next=80, gen=0]
[08:42:54][Step 2/2] I181018 08:38:43.768523 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=bebb9911] proposing REMOVE_REPLICA((n2,s2):79): updated=[(n1,s1):1 (n3,s3):3] next=80
[08:42:54][Step 2/2] I181018 08:38:43.792543 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 0b9bb52e at applied index 392
[08:42:54][Step 2/2] I181018 08:38:43.800039 67209 storage/store.go:3640 [s2,r1/79:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:43.803962 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 577, log entries: 46, rate-limit: 2.0 MiB/sec, 24ms
[08:42:54][Step 2/2] I181018 08:38:43.807137 68263 storage/store.go:2580 [replicaGC,s2,r1/79:/M{in-ax}] removing replica r1/79
[08:42:54][Step 2/2] I181018 08:38:43.817175 68263 storage/replica.go:863 [replicaGC,s2,r1/79:/M{in-ax}] removed 48 (43+5) keys in 9ms [clear=3ms commit=6ms]
[08:42:54][Step 2/2] I181018 08:38:43.828442 68210 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 392 (id=0b9bb52e, encoded size=54390, 1 rocksdb batches, 46 log entries)
[08:42:54][Step 2/2] I181018 08:38:43.856075 68210 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 27ms [clear=8ms batch=0ms entries=16ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:43.860655 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):80): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=80, gen=0]
[08:42:54][Step 2/2] I181018 08:38:43.879185 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=31722c2b] proposing ADD_REPLICA((n2,s2):80): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):80] next=81
[08:42:54][Step 2/2] I181018 08:38:43.888524 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):80): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):80, next=81, gen=0]
[08:42:54][Step 2/2] I181018 08:38:43.904776 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=321f6d78] proposing REMOVE_REPLICA((n2,s2):80): updated=[(n1,s1):1 (n3,s3):3] next=81
[08:42:54][Step 2/2] I181018 08:38:43.919482 68240 storage/store.go:2580 [replicaGC,s2,r1/80:/M{in-ax}] removing replica r1/80
[08:42:54][Step 2/2] I181018 08:38:43.920726 67209 storage/store.go:3640 [s2,r1/80:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:43.921152 67209 storage/store.go:3640 [s2,r1/80:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:43.921711 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 56b9fb2c at applied index 398
[08:42:54][Step 2/2] I181018 08:38:43.927899 68240 storage/replica.go:863 [replicaGC,s2,r1/80:/M{in-ax}] removed 48 (43+5) keys in 7ms [clear=7ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:43.931052 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 585, log entries: 52, rate-limit: 2.0 MiB/sec, 14ms
[08:42:54][Step 2/2] I181018 08:38:43.951782 68219 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 398 (id=56b9fb2c, encoded size=56693, 1 rocksdb batches, 52 log entries)
[08:42:54][Step 2/2] I181018 08:38:43.984998 68219 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 33ms [clear=8ms batch=0ms entries=18ms commit=6ms]
[08:42:54][Step 2/2] I181018 08:38:43.989645 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):81): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=81, gen=0]
[08:42:54][Step 2/2] I181018 08:38:44.001957 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=2fce2b6c] proposing ADD_REPLICA((n2,s2):81): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):81] next=82
[08:42:54][Step 2/2] I181018 08:38:44.015662 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):81): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):81, next=82, gen=0]
[08:42:54][Step 2/2] I181018 08:38:44.036004 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=77834217] proposing REMOVE_REPLICA((n2,s2):81): updated=[(n1,s1):1 (n3,s3):3] next=82
[08:42:54][Step 2/2] I181018 08:38:44.051979 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 32cd24cd at applied index 403
[08:42:54][Step 2/2] I181018 08:38:44.052531 67209 storage/store.go:3640 [s2,r1/81:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:44.057809 68266 storage/store.go:2580 [replicaGC,s2,r1/81:/M{in-ax}] removing replica r1/81
[08:42:54][Step 2/2] I181018 08:38:44.062762 68266 storage/replica.go:863 [replicaGC,s2,r1/81:/M{in-ax}] removed 48 (43+5) keys in 4ms [clear=4ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:44.064325 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 592, log entries: 57, rate-limit: 2.0 MiB/sec, 15ms
[08:42:54][Step 2/2] I181018 08:38:44.075483 68265 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 403 (id=32cd24cd, encoded size=58824, 1 rocksdb batches, 57 log entries)
[08:42:54][Step 2/2] I181018 08:38:44.100628 68265 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 25ms [clear=7ms batch=0ms entries=15ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:44.103663 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:44.106497 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):82): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=82, gen=0]
[08:42:54][Step 2/2] I181018 08:38:44.128947 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=87d17c44] proposing ADD_REPLICA((n2,s2):82): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):82] next=83
[08:42:54][Step 2/2] I181018 08:38:44.142048 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):82): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):82, next=83, gen=0]
[08:42:54][Step 2/2] I181018 08:38:44.156707 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=207ed4d8] proposing REMOVE_REPLICA((n2,s2):82): updated=[(n1,s1):1 (n3,s3):3] next=83
[08:42:54][Step 2/2] I181018 08:38:44.177483 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot e405369a at applied index 407
[08:42:54][Step 2/2] I181018 08:38:44.180555 68277 storage/store.go:2580 [replicaGC,s2,r1/82:/M{in-ax}] removing replica r1/82
[08:42:54][Step 2/2] I181018 08:38:44.184413 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 598, log entries: 61, rate-limit: 2.0 MiB/sec, 12ms
[08:42:54][Step 2/2] I181018 08:38:44.185154 68277 storage/replica.go:863 [replicaGC,s2,r1/82:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:44.198015 68295 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 407 (id=e405369a, encoded size=60873, 1 rocksdb batches, 61 log entries)
[08:42:54][Step 2/2] I181018 08:38:44.231185 68295 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 33ms [clear=8ms batch=0ms entries=22ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:44.235898 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):83): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=83, gen=0]
[08:42:54][Step 2/2] I181018 08:38:44.249381 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=fcd46dc4] proposing ADD_REPLICA((n2,s2):83): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):83] next=84
[08:42:54][Step 2/2] I181018 08:38:44.266908 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):83): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):83, next=84, gen=0]
[08:42:54][Step 2/2] I181018 08:38:44.295103 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=bc6d160b] proposing REMOVE_REPLICA((n2,s2):83): updated=[(n1,s1):1 (n3,s3):3] next=84
[08:42:54][Step 2/2] I181018 08:38:44.333192 67209 storage/store.go:3640 [s2,r1/83:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:44.333573 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 3aa09984 at applied index 413
[08:42:54][Step 2/2] I181018 08:38:44.335632 68272 storage/store.go:2580 [replicaGC,s2,r1/83:/M{in-ax}] removing replica r1/83
[08:42:54][Step 2/2] I181018 08:38:44.343295 68272 storage/replica.go:863 [replicaGC,s2,r1/83:/M{in-ax}] removed 48 (43+5) keys in 6ms [clear=6ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:44.343767 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 606, log entries: 67, rate-limit: 2.0 MiB/sec, 14ms
[08:42:54][Step 2/2] I181018 08:38:44.354026 68271 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 413 (id=3aa09984, encoded size=63176, 1 rocksdb batches, 67 log entries)
[08:42:54][Step 2/2] I181018 08:38:44.388614 68271 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 34ms [clear=8ms batch=0ms entries=22ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:44.393744 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):84): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=84, gen=0]
[08:42:54][Step 2/2] I181018 08:38:44.416508 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=8c942f1c] proposing ADD_REPLICA((n2,s2):84): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):84] next=85
[08:42:54][Step 2/2] I181018 08:38:44.426728 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):84): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):84, next=85, gen=0]
[08:42:54][Step 2/2] I181018 08:38:44.449399 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=8fb24921] proposing REMOVE_REPLICA((n2,s2):84): updated=[(n1,s1):1 (n3,s3):3] next=85
[08:42:54][Step 2/2] I181018 08:38:44.468035 68274 storage/store.go:2580 [replicaGC,s2,r1/84:/M{in-ax}] removing replica r1/84
[08:42:54][Step 2/2] I181018 08:38:44.468869 67209 storage/store.go:3640 [s2,r1/84:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:44.470644 67209 storage/store.go:3640 [s2,r1/84:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:44.472160 68274 storage/replica.go:863 [replicaGC,s2,r1/84:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:44.473808 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 4943fb79 at applied index 418
[08:42:54][Step 2/2] I181018 08:38:44.481189 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 613, log entries: 72, rate-limit: 2.0 MiB/sec, 10ms
[08:42:54][Step 2/2] I181018 08:38:44.481894 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:44.493442 68280 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 418 (id=4943fb79, encoded size=65307, 1 rocksdb batches, 72 log entries)
[08:42:54][Step 2/2] I181018 08:38:44.526193 68280 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 32ms [clear=7ms batch=0ms entries=22ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:44.529477 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):85): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=85, gen=0]
[08:42:54][Step 2/2] I181018 08:38:44.542113 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b91475b9] proposing ADD_REPLICA((n2,s2):85): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):85] next=86
[08:42:54][Step 2/2] I181018 08:38:44.557922 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):85): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):85, next=86, gen=0]
[08:42:54][Step 2/2] I181018 08:38:44.577657 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=bdb7d5b3] proposing REMOVE_REPLICA((n2,s2):85): updated=[(n1,s1):1 (n3,s3):3] next=86
[08:42:54][Step 2/2] I181018 08:38:44.609689 67209 storage/store.go:3640 [s2,r1/85:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:44.610253 67209 storage/store.go:3640 [s2,r1/85:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:44.610884 68282 storage/store.go:2580 [replicaGC,s2,r1/85:/M{in-ax}] removing replica r1/85
[08:42:54][Step 2/2] I181018 08:38:44.612148 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot c92656a5 at applied index 422
[08:42:54][Step 2/2] I181018 08:38:44.619401 68282 storage/replica.go:863 [replicaGC,s2,r1/85:/M{in-ax}] removed 48 (43+5) keys in 7ms [clear=7ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:44.624350 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 619, log entries: 76, rate-limit: 2.0 MiB/sec, 17ms
[08:42:54][Step 2/2] I181018 08:38:44.638417 68224 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 422 (id=c92656a5, encoded size=67266, 1 rocksdb batches, 76 log entries)
[08:42:54][Step 2/2] I181018 08:38:44.679665 68224 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 41ms [clear=8ms batch=0ms entries=29ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:44.683809 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):86): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=86, gen=0]
[08:42:54][Step 2/2] I181018 08:38:44.698242 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=cb7ae414] proposing ADD_REPLICA((n2,s2):86): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):86] next=87
[08:42:54][Step 2/2] I181018 08:38:44.716886 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):86): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):86, next=87, gen=0]
[08:42:54][Step 2/2] I181018 08:38:44.739307 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=7d73e03c] proposing REMOVE_REPLICA((n2,s2):86): updated=[(n1,s1):1 (n3,s3):3] next=87
[08:42:54][Step 2/2] I181018 08:38:44.749949 67209 storage/store.go:3640 [s2,r1/86:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:44.750437 67209 storage/store.go:3640 [s2,r1/86:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:44.757823 68313 storage/store.go:2580 [replicaGC,s2,r1/86:/M{in-ax}] removing replica r1/86
[08:42:54][Step 2/2] I181018 08:38:44.762224 68313 storage/replica.go:863 [replicaGC,s2,r1/86:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:44.763453 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 054a2de4 at applied index 428
[08:42:54][Step 2/2] I181018 08:38:44.770356 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 627, log entries: 82, rate-limit: 2.0 MiB/sec, 14ms
[08:42:54][Step 2/2] I181018 08:38:44.783895 68358 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 428 (id=054a2de4, encoded size=69569, 1 rocksdb batches, 82 log entries)
[08:42:54][Step 2/2] I181018 08:38:44.828635 68358 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 44ms [clear=9ms batch=0ms entries=32ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:44.830504 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:44.839337 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):87): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=87, gen=0]
[08:42:54][Step 2/2] I181018 08:38:44.867444 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b1d96fab] proposing ADD_REPLICA((n2,s2):87): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):87] next=88
[08:42:54][Step 2/2] I181018 08:38:44.881104 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):87): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):87, next=88, gen=0]
[08:42:54][Step 2/2] I181018 08:38:44.895423 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=de575fc0] proposing REMOVE_REPLICA((n2,s2):87): updated=[(n1,s1):1 (n3,s3):3] next=88
[08:42:54][Step 2/2] I181018 08:38:44.915500 67209 storage/store.go:3640 [s2,r1/87:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:44.915925 68373 storage/store.go:2580 [replicaGC,s2,r1/87:/M{in-ax}] removing replica r1/87
[08:42:54][Step 2/2] I181018 08:38:44.915992 67209 storage/store.go:3640 [s2,r1/87:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:44.916510 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 76e1e359 at applied index 433
[08:42:54][Step 2/2] I181018 08:38:44.923107 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 634, log entries: 87, rate-limit: 2.0 MiB/sec, 11ms
[08:42:54][Step 2/2] I181018 08:38:44.924424 68373 storage/replica.go:863 [replicaGC,s2,r1/87:/M{in-ax}] removed 48 (43+5) keys in 7ms [clear=7ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:44.936161 68362 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 433 (id=76e1e359, encoded size=71790, 1 rocksdb batches, 87 log entries)
[08:42:54][Step 2/2] I181018 08:38:44.972960 68362 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 36ms [clear=8ms batch=0ms entries=25ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:44.977817 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):88): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=88, gen=0]
[08:42:54][Step 2/2] I181018 08:38:44.995323 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=0ebc5810] proposing ADD_REPLICA((n2,s2):88): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):88] next=89
[08:42:54][Step 2/2] I181018 08:38:45.005176 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):88): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):88, next=89, gen=0]
[08:42:54][Step 2/2] I181018 08:38:45.030864 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=4a70d39d] proposing REMOVE_REPLICA((n2,s2):88): updated=[(n1,s1):1 (n3,s3):3] next=89
[08:42:54][Step 2/2] I181018 08:38:45.047444 67209 storage/store.go:3640 [s2,r1/88:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:45.053802 67209 storage/store.go:3640 [s2,r1/88:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:45.050912 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 3a6f54c8 at applied index 437
[08:42:54][Step 2/2] I181018 08:38:45.053368 68284 storage/store.go:2580 [replicaGC,s2,r1/88:/M{in-ax}] removing replica r1/88
[08:42:54][Step 2/2] I181018 08:38:45.062188 68284 storage/replica.go:863 [replicaGC,s2,r1/88:/M{in-ax}] removed 48 (43+5) keys in 6ms [clear=5ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:45.063550 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 640, log entries: 91, rate-limit: 2.0 MiB/sec, 15ms
[08:42:54][Step 2/2] I181018 08:38:45.077060 68306 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 437 (id=3a6f54c8, encoded size=73749, 1 rocksdb batches, 91 log entries)
[08:42:54][Step 2/2] I181018 08:38:45.119048 68306 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 42ms [clear=8ms batch=0ms entries=29ms commit=3ms]
[08:42:54][Step 2/2] I181018 08:38:45.123622 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):89): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=89, gen=0]
[08:42:54][Step 2/2] I181018 08:38:45.126986 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:45.137470 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b1cdf23d] proposing ADD_REPLICA((n2,s2):89): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):89] next=90
[08:42:54][Step 2/2] I181018 08:38:45.158022 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):89): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):89, next=90, gen=0]
[08:42:54][Step 2/2] I181018 08:38:45.180713 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=479f99ea] proposing REMOVE_REPLICA((n2,s2):89): updated=[(n1,s1):1 (n3,s3):3] next=90
[08:42:54][Step 2/2] I181018 08:38:45.193422 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot b4b4e84c at applied index 443
[08:42:54][Step 2/2] I181018 08:38:45.197675 68392 storage/store.go:2580 [replicaGC,s2,r1/89:/M{in-ax}] removing replica r1/89
[08:42:54][Step 2/2] I181018 08:38:45.202377 68392 storage/replica.go:863 [replicaGC,s2,r1/89:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:45.208903 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 648, log entries: 97, rate-limit: 2.0 MiB/sec, 19ms
[08:42:54][Step 2/2] I181018 08:38:45.221989 68366 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 443 (id=b4b4e84c, encoded size=76052, 1 rocksdb batches, 97 log entries)
[08:42:54][Step 2/2] I181018 08:38:45.267454 68366 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 45ms [clear=8ms batch=0ms entries=33ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:45.273029 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):90): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=90, gen=0]
[08:42:54][Step 2/2] I181018 08:38:45.289258 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=575082b2] proposing ADD_REPLICA((n2,s2):90): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):90] next=91
[08:42:54][Step 2/2] I181018 08:38:45.314516 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):90): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):90, next=91, gen=0]
[08:42:54][Step 2/2] I181018 08:38:45.332202 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=29272609] proposing REMOVE_REPLICA((n2,s2):90): updated=[(n1,s1):1 (n3,s3):3] next=91
[08:42:54][Step 2/2] I181018 08:38:45.365953 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 7b7dd10e at applied index 448
[08:42:54][Step 2/2] I181018 08:38:45.367327 67209 storage/store.go:3640 [s2,r1/90:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:45.371301 68395 storage/store.go:2580 [replicaGC,s2,r1/90:/M{in-ax}] removing replica r1/90
[08:42:54][Step 2/2] I181018 08:38:45.379901 68395 storage/replica.go:863 [replicaGC,s2,r1/90:/M{in-ax}] removed 48 (43+5) keys in 7ms [clear=7ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:45.384133 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:45.387023 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 655, log entries: 102, rate-limit: 2.0 MiB/sec, 26ms
[08:42:54][Step 2/2] I181018 08:38:45.399674 68353 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 448 (id=7b7dd10e, encoded size=78183, 1 rocksdb batches, 102 log entries)
[08:42:54][Step 2/2] I181018 08:38:45.447511 68353 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 48ms [clear=6ms batch=0ms entries=38ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:45.452059 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):91): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=91, gen=0]
[08:42:54][Step 2/2] I181018 08:38:45.466410 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=001a1417] proposing ADD_REPLICA((n2,s2):91): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):91] next=92
[08:42:54][Step 2/2] I181018 08:38:45.490911 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):91): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):91, next=92, gen=0]
[08:42:54][Step 2/2] I181018 08:38:45.506732 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=4ff319b3] proposing REMOVE_REPLICA((n2,s2):91): updated=[(n1,s1):1 (n3,s3):3] next=92
[08:42:54][Step 2/2] I181018 08:38:45.517245 67209 storage/store.go:3640 [s2,r1/91:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:45.519555 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 6defdfba at applied index 452
[08:42:54][Step 2/2] I181018 08:38:45.520827 68368 storage/store.go:2580 [replicaGC,s2,r1/91:/M{in-ax}] removing replica r1/91
[08:42:54][Step 2/2] I181018 08:38:45.519569 67209 storage/store.go:3640 [s2,r1/91:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:45.532737 68368 storage/replica.go:863 [replicaGC,s2,r1/91:/M{in-ax}] removed 48 (43+5) keys in 11ms [clear=11ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:45.534270 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 661, log entries: 106, rate-limit: 2.0 MiB/sec, 17ms
[08:42:54][Step 2/2] I181018 08:38:45.547878 68451 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 452 (id=6defdfba, encoded size=80142, 1 rocksdb batches, 106 log entries)
[08:42:54][Step 2/2] I181018 08:38:45.606198 68451 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 58ms [clear=20ms batch=0ms entries=34ms commit=3ms]
[08:42:54][Step 2/2] I181018 08:38:45.609743 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):92): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=92, gen=0]
[08:42:54][Step 2/2] I181018 08:38:45.621234 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=af21f27f] proposing ADD_REPLICA((n2,s2):92): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):92] next=93
[08:42:54][Step 2/2] I181018 08:38:45.632898 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):92): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):92, next=93, gen=0]
[08:42:54][Step 2/2] I181018 08:38:45.647912 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=d70c8ee8] proposing REMOVE_REPLICA((n2,s2):92): updated=[(n1,s1):1 (n3,s3):3] next=93
[08:42:54][Step 2/2] I181018 08:38:45.664777 68467 storage/store.go:2580 [replicaGC,s2,r1/92:/M{in-ax}] removing replica r1/92
[08:42:54][Step 2/2] I181018 08:38:45.668953 68467 storage/replica.go:863 [replicaGC,s2,r1/92:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:45.672983 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 556afc78 at applied index 458
[08:42:54][Step 2/2] I181018 08:38:45.680893 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:45.682250 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 669, log entries: 112, rate-limit: 2.0 MiB/sec, 12ms
[08:42:54][Step 2/2] I181018 08:38:45.703990 68427 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 458 (id=556afc78, encoded size=82445, 1 rocksdb batches, 112 log entries)
[08:42:54][Step 2/2] I181018 08:38:45.744987 68427 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 41ms [clear=6ms batch=0ms entries=31ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:45.748224 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):93): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=93, gen=0]
[08:42:54][Step 2/2] I181018 08:38:45.767655 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=3476deb5] proposing ADD_REPLICA((n2,s2):93): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):93] next=94
[08:42:54][Step 2/2] I181018 08:38:45.783279 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):93): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):93, next=94, gen=0]
[08:42:54][Step 2/2] I181018 08:38:45.800109 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=cd208095] proposing REMOVE_REPLICA((n2,s2):93): updated=[(n1,s1):1 (n3,s3):3] next=94
[08:42:54][Step 2/2] I181018 08:38:45.815881 67209 storage/store.go:3640 [s2,r1/93:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:45.816902 68398 storage/store.go:2580 [replicaGC,s2,r1/93:/M{in-ax}] removing replica r1/93
[08:42:54][Step 2/2] I181018 08:38:45.821154 68398 storage/replica.go:863 [replicaGC,s2,r1/93:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:45.821363 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 113f966d at applied index 463
[08:42:54][Step 2/2] I181018 08:38:45.830892 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 676, log entries: 117, rate-limit: 2.0 MiB/sec, 14ms
[08:42:54][Step 2/2] I181018 08:38:45.844093 68354 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 463 (id=113f966d, encoded size=84576, 1 rocksdb batches, 117 log entries)
[08:42:54][Step 2/2] I181018 08:38:45.884208 68354 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 40ms [clear=7ms batch=0ms entries=29ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:45.889069 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):94): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=94, gen=0]
[08:42:54][Step 2/2] I181018 08:38:45.900375 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=2815baca] proposing ADD_REPLICA((n2,s2):94): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):94] next=95
[08:42:54][Step 2/2] I181018 08:38:45.910951 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):94): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):94, next=95, gen=0]
[08:42:54][Step 2/2] I181018 08:38:45.929198 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=a88870d6] proposing REMOVE_REPLICA((n2,s2):94): updated=[(n1,s1):1 (n3,s3):3] next=95
[08:42:54][Step 2/2] I181018 08:38:45.959740 67209 storage/store.go:3640 [s2,r1/94:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:45.961334 67209 storage/store.go:3640 [s2,r1/94:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:45.963030 68441 storage/store.go:2580 [replicaGC,s2,r1/94:/M{in-ax}] removing replica r1/94
[08:42:54][Step 2/2] I181018 08:38:45.963683 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 2d3b9a2b at applied index 467
[08:42:54][Step 2/2] I181018 08:38:45.971509 68441 storage/replica.go:863 [replicaGC,s2,r1/94:/M{in-ax}] removed 48 (43+5) keys in 6ms [clear=6ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:45.976752 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 682, log entries: 121, rate-limit: 2.0 MiB/sec, 16ms
[08:42:54][Step 2/2] I181018 08:38:45.990404 68412 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 467 (id=2d3b9a2b, encoded size=86535, 1 rocksdb batches, 121 log entries)
[08:42:54][Step 2/2] I181018 08:38:46.039619 68412 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 49ms [clear=10ms batch=0ms entries=36ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:38:46.041279 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:46.043755 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):95): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=95, gen=0]
[08:42:54][Step 2/2] I181018 08:38:46.064165 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=77d61fa6] proposing ADD_REPLICA((n2,s2):95): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):95] next=96
[08:42:54][Step 2/2] I181018 08:38:46.079183 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):95): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):95, next=96, gen=0]
[08:42:54][Step 2/2] I181018 08:38:46.094403 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=28f84e0f] proposing REMOVE_REPLICA((n2,s2):95): updated=[(n1,s1):1 (n3,s3):3] next=96
[08:42:54][Step 2/2] I181018 08:38:46.108009 67209 storage/store.go:3640 [s2,r1/95:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:46.108444 67209 storage/store.go:3640 [s2,r1/95:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:46.109137 68499 storage/store.go:2580 [replicaGC,s2,r1/95:/M{in-ax}] removing replica r1/95
[08:42:54][Step 2/2] I181018 08:38:46.113215 68499 storage/replica.go:863 [replicaGC,s2,r1/95:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:46.113670 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 3b3ad2af at applied index 473
[08:42:54][Step 2/2] I181018 08:38:46.122574 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 690, log entries: 127, rate-limit: 2.0 MiB/sec, 13ms
[08:42:54][Step 2/2] I181018 08:38:46.135453 68445 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 473 (id=3b3ad2af, encoded size=88928, 1 rocksdb batches, 127 log entries)
[08:42:54][Step 2/2] I181018 08:38:46.184792 68445 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 49ms [clear=9ms batch=0ms entries=36ms commit=3ms]
[08:42:54][Step 2/2] I181018 08:38:46.193831 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):96): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=96, gen=0]
[08:42:54][Step 2/2] I181018 08:38:46.214702 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b124f893] proposing ADD_REPLICA((n2,s2):96): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):96] next=97
[08:42:54][Step 2/2] I181018 08:38:46.227505 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):96): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):96, next=97, gen=0]
[08:42:54][Step 2/2] I181018 08:38:46.241287 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=db3e47af] proposing REMOVE_REPLICA((n2,s2):96): updated=[(n1,s1):1 (n3,s3):3] next=97
[08:42:54][Step 2/2] I181018 08:38:46.254527 67209 storage/store.go:3640 [s2,r1/96:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:46.256834 68379 storage/store.go:2580 [replicaGC,s2,r1/96:/M{in-ax}] removing replica r1/96
[08:42:54][Step 2/2] I181018 08:38:46.259334 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 0c1565b3 at applied index 478
[08:42:54][Step 2/2] I181018 08:38:46.261990 68379 storage/replica.go:863 [replicaGC,s2,r1/96:/M{in-ax}] removed 48 (43+5) keys in 4ms [clear=4ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:46.268412 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 697, log entries: 132, rate-limit: 2.0 MiB/sec, 12ms
[08:42:54][Step 2/2] I181018 08:38:46.284555 68515 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 478 (id=0c1565b3, encoded size=91059, 1 rocksdb batches, 132 log entries)
[08:42:54][Step 2/2] I181018 08:38:46.334329 68515 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 49ms [clear=10ms batch=0ms entries=36ms commit=3ms]
[08:42:54][Step 2/2] I181018 08:38:46.336734 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:46.340428 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):97): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=97, gen=0]
[08:42:54][Step 2/2] I181018 08:38:46.359000 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=0ef8006f] proposing ADD_REPLICA((n2,s2):97): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):97] next=98
[08:42:54][Step 2/2] I181018 08:38:46.370105 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):97): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):97, next=98, gen=0]
[08:42:54][Step 2/2] I181018 08:38:46.387772 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=9a53a83d] proposing REMOVE_REPLICA((n2,s2):97): updated=[(n1,s1):1 (n3,s3):3] next=98
[08:42:54][Step 2/2] I181018 08:38:46.401819 67209 storage/store.go:3640 [s2,r1/97:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:46.403018 68430 storage/store.go:2580 [replicaGC,s2,r1/97:/M{in-ax}] removing replica r1/97
[08:42:54][Step 2/2] I181018 08:38:46.404300 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 6b14340b at applied index 482
[08:42:54][Step 2/2] I181018 08:38:46.412908 68430 storage/replica.go:863 [replicaGC,s2,r1/97:/M{in-ax}] removed 48 (43+5) keys in 9ms [clear=8ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:46.412945 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 703, log entries: 136, rate-limit: 2.0 MiB/sec, 12ms
[08:42:54][Step 2/2] I181018 08:38:46.424275 68447 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 482 (id=6b14340b, encoded size=93108, 1 rocksdb batches, 136 log entries)
[08:42:54][Step 2/2] I181018 08:38:46.484134 68447 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 60ms [clear=10ms batch=0ms entries=46ms commit=3ms]
[08:42:54][Step 2/2] I181018 08:38:46.487997 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):98): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=98, gen=0]
[08:42:54][Step 2/2] I181018 08:38:46.502055 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=2fe981a2] proposing ADD_REPLICA((n2,s2):98): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):98] next=99
[08:42:54][Step 2/2] I181018 08:38:46.515849 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):98): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):98, next=99, gen=0]
[08:42:54][Step 2/2] I181018 08:38:46.538874 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=2b5e8781] proposing REMOVE_REPLICA((n2,s2):98): updated=[(n1,s1):1 (n3,s3):3] next=99
[08:42:54][Step 2/2] I181018 08:38:46.555035 67209 storage/store.go:3640 [s2,r1/98:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:46.557717 68491 storage/store.go:2580 [replicaGC,s2,r1/98:/M{in-ax}] removing replica r1/98
[08:42:54][Step 2/2] I181018 08:38:46.559757 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 406acf95 at applied index 488
[08:42:54][Step 2/2] I181018 08:38:46.564216 68491 storage/replica.go:863 [replicaGC,s2,r1/98:/M{in-ax}] removed 48 (43+5) keys in 5ms [clear=5ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:46.570615 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 711, log entries: 142, rate-limit: 2.0 MiB/sec, 14ms
[08:42:54][Step 2/2] I181018 08:38:46.588767 68504 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 488 (id=406acf95, encoded size=95411, 1 rocksdb batches, 142 log entries)
[08:42:54][Step 2/2] I181018 08:38:46.657715 68504 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 69ms [clear=12ms batch=0ms entries=52ms commit=3ms]
[08:42:54][Step 2/2] I181018 08:38:46.659457 67058 storage/store.go:3659 [s1,r1/1:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:46.664027 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):99): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=99, gen=0]
[08:42:54][Step 2/2] I181018 08:38:46.683662 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=f05ef8d9] proposing ADD_REPLICA((n2,s2):99): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):99] next=100
[08:42:54][Step 2/2] I181018 08:38:46.693950 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):99): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):99, next=100, gen=0]
[08:42:54][Step 2/2] I181018 08:38:46.717881 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=6b6a4c4c] proposing REMOVE_REPLICA((n2,s2):99): updated=[(n1,s1):1 (n3,s3):3] next=100
[08:42:54][Step 2/2] I181018 08:38:46.734092 67209 storage/store.go:3640 [s2,r1/99:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:46.734909 68532 storage/store.go:2580 [replicaGC,s2,r1/99:/M{in-ax}] removing replica r1/99
[08:42:54][Step 2/2] I181018 08:38:46.735365 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 5e952872 at applied index 493
[08:42:54][Step 2/2] I181018 08:38:46.743351 68532 storage/replica.go:863 [replicaGC,s2,r1/99:/M{in-ax}] removed 48 (43+5) keys in 7ms [clear=7ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:46.747147 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 718, log entries: 147, rate-limit: 2.0 MiB/sec, 14ms
[08:42:54][Step 2/2] I181018 08:38:46.759653 68534 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 493 (id=5e952872, encoded size=97632, 1 rocksdb batches, 147 log entries)
[08:42:54][Step 2/2] I181018 08:38:46.814627 68534 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 55ms [clear=9ms batch=0ms entries=42ms commit=3ms]
[08:42:54][Step 2/2] I181018 08:38:46.819463 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):100): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=100, gen=0]
[08:42:54][Step 2/2] I181018 08:38:46.832424 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=603df015] proposing ADD_REPLICA((n2,s2):100): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):100] next=101
[08:42:54][Step 2/2] I181018 08:38:46.843197 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):100): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):100, next=101, gen=0]
[08:42:54][Step 2/2] I181018 08:38:46.858186 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=0b10af5d] proposing REMOVE_REPLICA((n2,s2):100): updated=[(n1,s1):1 (n3,s3):3] next=101
[08:42:54][Step 2/2] I181018 08:38:46.872846 68475 storage/store.go:2580 [replicaGC,s2,r1/100:/M{in-ax}] removing replica r1/100
[08:42:54][Step 2/2] I181018 08:38:46.873354 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot fff062dc at applied index 497
[08:42:54][Step 2/2] I181018 08:38:46.874956 67209 storage/store.go:3640 [s2,r1/100:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:46.880298 68475 storage/replica.go:863 [replicaGC,s2,r1/100:/M{in-ax}] removed 48 (43+5) keys in 6ms [clear=6ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:46.884218 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 724, log entries: 151, rate-limit: 2.0 MiB/sec, 14ms
[08:42:54][Step 2/2] I181018 08:38:46.898256 68536 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 497 (id=fff062dc, encoded size=99591, 1 rocksdb batches, 151 log entries)
[08:42:54][Step 2/2] I181018 08:38:46.965408 68536 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 67ms [clear=10ms batch=0ms entries=53ms commit=3ms]
[08:42:54][Step 2/2] I181018 08:38:46.969010 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):101): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=101, gen=0]
[08:42:54][Step 2/2] I181018 08:38:46.979201 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=9f4df572] proposing ADD_REPLICA((n2,s2):101): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):101] next=102
[08:42:54][Step 2/2] I181018 08:38:46.989947 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):101): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):101, next=102, gen=0]
[08:42:54][Step 2/2] I181018 08:38:47.021850 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=ceba29cf] proposing REMOVE_REPLICA((n2,s2):101): updated=[(n1,s1):1 (n3,s3):3] next=102
[08:42:54][Step 2/2] I181018 08:38:47.032719 67209 storage/store.go:3640 [s2,r1/101:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:47.033350 67209 storage/store.go:3640 [s2,r1/101:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:47.036998 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 37e0a794 at applied index 503
[08:42:54][Step 2/2] I181018 08:38:47.044023 68566 storage/store.go:2580 [replicaGC,s2,r1/101:/M{in-ax}] removing replica r1/101
[08:42:54][Step 2/2] I181018 08:38:47.046185 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 732, log entries: 157, rate-limit: 2.0 MiB/sec, 12ms
[08:42:54][Step 2/2] I181018 08:38:47.048425 68566 storage/replica.go:863 [replicaGC,s2,r1/101:/M{in-ax}] removed 48 (43+5) keys in 3ms [clear=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:47.063282 68508 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 503 (id=37e0a794, encoded size=101894, 1 rocksdb batches, 157 log entries)
[08:42:54][Step 2/2] I181018 08:38:47.128036 68508 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 65ms [clear=11ms batch=0ms entries=50ms commit=3ms]
[08:42:54][Step 2/2] I181018 08:38:47.132924 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):102): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=102, gen=0]
[08:42:54][Step 2/2] I181018 08:38:47.147438 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=08ccf0d7] proposing ADD_REPLICA((n2,s2):102): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):102] next=103
[08:42:54][Step 2/2] I181018 08:38:47.170205 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):102): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, (n2,s2):102, next=103, gen=0]
[08:42:54][Step 2/2] I181018 08:38:47.183904 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=18c3a327] proposing REMOVE_REPLICA((n2,s2):102): updated=[(n1,s1):1 (n3,s3):3] next=103
[08:42:54][Step 2/2] I181018 08:38:47.193278 67209 storage/store.go:3640 [s2,r1/102:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:47.194343 67209 storage/store.go:3640 [s2,r1/102:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:47.197983 66799 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 5a44ebe7 at applied index 508
[08:42:54][Step 2/2] I181018 08:38:47.197989 68509 storage/store.go:2580 [replicaGC,s2,r1/102:/M{in-ax}] removing replica r1/102
[08:42:54][Step 2/2] I181018 08:38:47.207631 68509 storage/replica.go:863 [replicaGC,s2,r1/102:/M{in-ax}] removed 48 (43+5) keys in 8ms [clear=8ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:47.211955 66799 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 739, log entries: 162, rate-limit: 2.0 MiB/sec, 17ms
[08:42:54][Step 2/2] I181018 08:38:47.227431 68541 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 508 (id=5a44ebe7, encoded size=104025, 1 rocksdb batches, 162 log entries)
[08:42:54][Step 2/2] I181018 08:38:47.291821 68541 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 64ms [clear=11ms batch=0ms entries=49ms commit=3ms]
[08:42:54][Step 2/2] I181018 08:38:47.297505 66799 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):103): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n3,s3):3, next=103, gen=0]
[08:42:54][Step 2/2] I181018 08:38:47.309715 66799 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=77c11e33] proposing ADD_REPLICA((n2,s2):103): updated=[(n1,s1):1 (n3,s3):3 (n2,s2):103] next=104
[08:42:54][Step 2/2] I181018 08:38:47.319087 68572 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] storage.Replica: gossipping first range
[08:42:54][Step 2/2] 1 [async] kv.TxnCoordSender: heartbeat loop
[08:42:54][Step 2/2] I181018 08:38:47.322381 68570 storage/replica_proposal.go:771 [s1,r1/1:/M{in-ax},txn=77c11e33] unable to gossip first range; hasLease=false, err=node unavailable; try another peer
[08:42:54][Step 2/2] I181018 08:38:47.322417 68572 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] storage.Replica: gossipping first range
[08:42:54][Step 2/2] W181018 08:38:47.369020 66778 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] --- PASS: TestRemovePlaceholderRace (13.61s)
[08:42:54][Step 2/2] === RUN TestReplicaGCRace
[08:42:54][Step 2/2] I181018 08:38:47.476443 68599 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:34099" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:47.530499 68599 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:47.531505 68599 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:33595" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:47.533116 68797 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:34099
[08:42:54][Step 2/2] W181018 08:38:47.585659 68599 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:47.586744 68599 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:35109" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:47.590072 68923 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:34099
[08:42:54][Step 2/2] I181018 08:38:47.613161 68599 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot c8983755 at applied index 16
[08:42:54][Step 2/2] I181018 08:38:47.614875 68599 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:47.616959 68934 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=c8983755, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:47.620543 68934 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:47.624064 68599 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:47.632416 68599 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=f554a4e0] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:47.712530 68599 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 7ebc08ba at applied index 18
[08:42:54][Step 2/2] I181018 08:38:47.714460 68599 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:47.716981 68935 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=7ebc08ba, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:47.720751 68935 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:47.724985 68599 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:47.743628 68599 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=65cf86ce] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:47.894369 68599 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:47.907419 68599 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=9cfa2f4f] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2] next=4
[08:42:54][Step 2/2] I181018 08:38:47.916410 68599 storage/store.go:2580 [s3,r1/?:/M{in-ax}] removing replica r1/3
[08:42:54][Step 2/2] I181018 08:38:47.917572 68599 storage/replica.go:863 [s3,r1/?:/M{in-ax}] removed 49 (43+6) keys in 1ms [clear=0ms commit=0ms]
[08:42:54][Step 2/2] --- PASS: TestReplicaGCRace (0.59s)
[08:42:54][Step 2/2] === RUN TestStoreRangeMoveDecommissioning
[08:42:54][Step 2/2] I181018 08:38:48.057977 68938 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41385" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:48.113200 68938 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:48.114302 68938 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:41585" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:48.118596 69204 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:41385
[08:42:54][Step 2/2] W181018 08:38:48.187370 68938 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:48.188719 68938 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:42007" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:48.190293 69303 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:41385
[08:42:54][Step 2/2] W181018 08:38:48.243590 68938 gossip/gossip.go:1496 [n4] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:48.248211 69218 gossip/client.go:129 [n4] started gossip client to 127.0.0.1:41385
[08:42:54][Step 2/2] I181018 08:38:48.252327 68938 gossip/gossip.go:393 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:44497" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:48.304707 68938 gossip/gossip.go:1496 [n5] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:48.307228 68938 gossip/gossip.go:393 [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:40969" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:48.309256 69552 gossip/client.go:129 [n5] started gossip client to 127.0.0.1:41385
[08:42:54][Step 2/2] I181018 08:38:48.321657 69439 gossip/server.go:282 [n1] refusing gossip from n5 (max 3 conns); forwarding to n3 ({tcp 127.0.0.1:42007})
[08:42:54][Step 2/2] I181018 08:38:48.326746 69552 gossip/client.go:134 [n5] closing client to n1 (127.0.0.1:41385): received forward from n1 to 3 (127.0.0.1:42007)
[08:42:54][Step 2/2] I181018 08:38:48.328852 69436 gossip/gossip.go:1510 [n5] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:38:48.330719 69557 gossip/client.go:129 [n5] started gossip client to 127.0.0.1:42007
[08:42:54][Step 2/2] W181018 08:38:48.417270 68938 gossip/gossip.go:1496 [n6] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:48.418048 68938 gossip/gossip.go:393 [n6] NodeDescriptor set to node_id:6 address:<network_field:"tcp" address_field:"127.0.0.1:44169" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:48.420150 68974 gossip/client.go:129 [n6] started gossip client to 127.0.0.1:41385
[08:42:54][Step 2/2] I181018 08:38:48.422090 69103 gossip/server.go:282 [n1] refusing gossip from n6 (max 3 conns); forwarding to n4 ({tcp 127.0.0.1:44497})
[08:42:54][Step 2/2] I181018 08:38:48.426906 68974 gossip/client.go:134 [n6] closing client to n1 (127.0.0.1:41385): received forward from n1 to 4 (127.0.0.1:44497)
[08:42:54][Step 2/2] I181018 08:38:48.428422 69580 gossip/gossip.go:1510 [n6] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:38:48.430022 69104 gossip/client.go:129 [n6] started gossip client to 127.0.0.1:44497
[08:42:54][Step 2/2] I181018 08:38:48.490439 68938 storage/client_test.go:421 gossip network initialized
[08:42:54][Step 2/2] I181018 08:38:48.495471 68938 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 08d43cc2 at applied index 19
[08:42:54][Step 2/2] I181018 08:38:48.497048 68938 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 52, log entries: 9, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:48.499127 69584 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 19 (id=08d43cc2, encoded size=8866, 1 rocksdb batches, 9 log entries)
[08:42:54][Step 2/2] I181018 08:38:48.502766 69584 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:48.506331 68938 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:48.514289 68938 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=542592cd] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:48.528383 68938 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 2915f5df at applied index 21
[08:42:54][Step 2/2] I181018 08:38:48.529734 68938 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 55, log entries: 11, rate-limit: 2.0 MiB/sec, 3ms
[08:42:54][Step 2/2] I181018 08:38:48.531927 68978 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 21 (id=2915f5df, encoded size=9808, 1 rocksdb batches, 11 log entries)
[08:42:54][Step 2/2] I181018 08:38:48.538711 68978 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=5ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:48.542787 68938 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:48.556490 68938 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=9eff5af0] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:48.647386 68938 storage/store_snapshot.go:621 [replicate,s1,r1/1:/M{in-ax}] sending preemptive snapshot c18c2871 at applied index 26
[08:42:54][Step 2/2] I181018 08:38:48.649255 68938 storage/store_snapshot.go:664 [replicate,s1,r1/1:/M{in-ax}] streamed snapshot to (n4,s4):?: kv pairs: 61, log entries: 16, rate-limit: 8.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:48.651141 69723 storage/replica_raftstorage.go:804 [s4,r1/?:{-}] applying preemptive snapshot at index 26 (id=c18c2871, encoded size=11333, 1 rocksdb batches, 16 log entries)
[08:42:54][Step 2/2] I181018 08:38:48.659272 69723 storage/replica_raftstorage.go:810 [s4,r1/?:/M{in-ax}] applied preemptive snapshot in 8ms [clear=0ms batch=0ms entries=6ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:48.663642 68938 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n4,s4):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:48.681995 68938 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=7e1a47de] proposing ADD_REPLICA((n4,s4):4): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3 (n4,s4):4] next=5
[08:42:54][Step 2/2] I181018 08:38:48.704213 68938 storage/store_snapshot.go:621 [replicate,s1,r1/1:/M{in-ax}] sending preemptive snapshot fe40ffc3 at applied index 29
[08:42:54][Step 2/2] I181018 08:38:48.708771 68938 storage/store_snapshot.go:664 [replicate,s1,r1/1:/M{in-ax}] streamed snapshot to (n5,s5):?: kv pairs: 65, log entries: 19, rate-limit: 8.0 MiB/sec, 8ms
[08:42:54][Step 2/2] I181018 08:38:48.710876 69689 storage/replica_raftstorage.go:804 [s5,r1/?:{-}] applying preemptive snapshot at index 29 (id=fe40ffc3, encoded size=12580, 1 rocksdb batches, 19 log entries)
[08:42:54][Step 2/2] I181018 08:38:48.717887 69689 storage/replica_raftstorage.go:810 [s5,r1/?:/M{in-ax}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=5ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:48.728302 68938 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n5,s5):5): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, (n4,s4):4, next=5, gen=0]
[08:42:54][Step 2/2] I181018 08:38:48.750330 68938 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=bfd5188e] proposing ADD_REPLICA((n5,s5):5): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3 (n4,s4):4 (n5,s5):5] next=6
[08:42:54][Step 2/2] I181018 08:38:48.778429 68938 storage/store_snapshot.go:621 [replicate,s1,r1/1:/M{in-ax}] sending preemptive snapshot 5bc4e082 at applied index 32
[08:42:54][Step 2/2] I181018 08:38:48.780440 68938 storage/store_snapshot.go:664 [replicate,s1,r1/1:/M{in-ax}] streamed snapshot to (n6,s6):?: kv pairs: 69, log entries: 22, rate-limit: 8.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:48.782790 69764 storage/replica_raftstorage.go:804 [s6,r1/?:{-}] applying preemptive snapshot at index 32 (id=5bc4e082, encoded size=13886, 1 rocksdb batches, 22 log entries)
[08:42:54][Step 2/2] I181018 08:38:48.800627 69764 storage/replica_raftstorage.go:810 [s6,r1/?:/M{in-ax}] applied preemptive snapshot in 15ms [clear=0ms batch=5ms entries=9ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:48.817825 68938 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n6,s6):6): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, (n4,s4):4, (n5,s5):5, next=6, gen=0]
[08:42:54][Step 2/2] I181018 08:38:48.840814 68938 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=59f52958] proposing ADD_REPLICA((n6,s6):6): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3 (n4,s4):4 (n5,s5):5 (n6,s6):6] next=7
[08:42:54][Step 2/2] I181018 08:38:48.863825 68938 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, (n4,s4):4, (n5,s5):5, (n6,s6):6, next=7, gen=0]
[08:42:54][Step 2/2] I181018 08:38:48.893644 68938 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=c0befd8c] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n6,s6):6 (n4,s4):4 (n5,s5):5] next=7
[08:42:54][Step 2/2] I181018 08:38:48.911687 69674 storage/store.go:3640 [s3,r1/3:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:48.935409 69780 storage/store.go:2580 [replicaGC,s3,r1/3:/M{in-ax}] removing replica r1/3
[08:42:54][Step 2/2] I181018 08:38:48.940744 69769 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] storage.replicaGC: processing replica
[08:42:54][Step 2/2] I181018 08:38:48.942955 69780 storage/replica.go:863 [replicaGC,s3,r1/3:/M{in-ax}] removed 51 (46+5) keys in 6ms [clear=1ms commit=5ms]
[08:42:54][Step 2/2] W181018 08:38:48.951407 69675 storage/raft_transport.go:282 unable to accept Raft message from (n2,s2):2: no handler registered for (n1,s1):1
[08:42:54][Step 2/2] W181018 08:38:48.954257 69675 storage/raft_transport.go:282 unable to accept Raft message from (n4,s4):4: no handler registered for (n1,s1):1
[08:42:54][Step 2/2] W181018 08:38:48.954943 69675 storage/raft_transport.go:282 unable to accept Raft message from (n5,s5):5: no handler registered for (n1,s1):1
[08:42:54][Step 2/2] W181018 08:38:48.956967 69674 storage/store.go:3662 [s2,r1/2:/M{in-ax}] raft error: node 1 claims to not contain store 1 for replica (n1,s1):1: store 1 was not found
[08:42:54][Step 2/2] W181018 08:38:48.957282 69672 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: store 1 was not found:
[08:42:54][Step 2/2] W181018 08:38:48.976700 69533 storage/store.go:1490 [s5,r1/5:/M{in-ax}] could not gossip first range descriptor: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:49.007405 69765 storage/raft_transport.go:584 while processing outgoing Raft queue to node 6: EOF:
[08:42:54][Step 2/2] --- PASS: TestStoreRangeMoveDecommissioning (1.04s)
[08:42:54][Step 2/2] === RUN TestStoreRangeRemoveDead
[08:42:54][Step 2/2] I181018 08:38:49.105182 69736 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:35703" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:49.164756 69736 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:49.165949 69736 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:43163" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:49.169072 69693 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:35703
[08:42:54][Step 2/2] W181018 08:38:49.230917 69736 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:49.231990 69736 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:34233" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:49.235287 69787 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:35703
[08:42:54][Step 2/2] W181018 08:38:49.293946 69736 gossip/gossip.go:1496 [n4] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:49.295318 69736 gossip/gossip.go:393 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:35047" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:49.298632 69881 gossip/client.go:129 [n4] started gossip client to 127.0.0.1:35703
[08:42:54][Step 2/2] W181018 08:38:49.365758 69736 gossip/gossip.go:1496 [n5] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:49.366927 69736 gossip/gossip.go:393 [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:42997" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:49.371949 69792 gossip/client.go:129 [n5] started gossip client to 127.0.0.1:35703
[08:42:54][Step 2/2] I181018 08:38:49.372940 70014 gossip/server.go:282 [n1] refusing gossip from n5 (max 3 conns); forwarding to n2 ({tcp 127.0.0.1:43163})
[08:42:54][Step 2/2] I181018 08:38:49.380273 69792 gossip/client.go:134 [n5] closing client to n1 (127.0.0.1:35703): received forward from n1 to 2 (127.0.0.1:43163)
[08:42:54][Step 2/2] I181018 08:38:49.383803 70350 gossip/gossip.go:1510 [n5] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:38:49.384737 69793 gossip/client.go:129 [n5] started gossip client to 127.0.0.1:43163
[08:42:54][Step 2/2] W181018 08:38:49.430345 69736 gossip/gossip.go:1496 [n6] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:49.431545 69736 gossip/gossip.go:393 [n6] NodeDescriptor set to node_id:6 address:<network_field:"tcp" address_field:"127.0.0.1:34457" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:49.433436 70452 gossip/client.go:129 [n6] started gossip client to 127.0.0.1:35703
[08:42:54][Step 2/2] I181018 08:38:49.434675 70018 gossip/server.go:282 [n1] refusing gossip from n6 (max 3 conns); forwarding to n3 ({tcp 127.0.0.1:34233})
[08:42:54][Step 2/2] I181018 08:38:49.439327 70452 gossip/client.go:134 [n6] closing client to n1 (127.0.0.1:35703): received forward from n1 to 3 (127.0.0.1:34233)
[08:42:54][Step 2/2] I181018 08:38:49.440280 70443 gossip/gossip.go:1510 [n6] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:38:49.441676 70467 gossip/client.go:129 [n6] started gossip client to 127.0.0.1:34233
[08:42:54][Step 2/2] I181018 08:38:49.500102 70454 storage/store_snapshot.go:621 [replicate,s1,r1/1:/M{in-ax}] sending preemptive snapshot 1e350b98 at applied index 19
[08:42:54][Step 2/2] I181018 08:38:49.502886 70454 storage/store_snapshot.go:664 [replicate,s1,r1/1:/M{in-ax}] streamed snapshot to (n4,s4):?: kv pairs: 52, log entries: 9, rate-limit: 8.0 MiB/sec, 23ms
[08:42:54][Step 2/2] I181018 08:38:49.505354 69886 storage/replica_raftstorage.go:804 [s4,r1/?:{-}] applying preemptive snapshot at index 19 (id=1e350b98, encoded size=8866, 1 rocksdb batches, 9 log entries)
[08:42:54][Step 2/2] I181018 08:38:49.509530 69886 storage/replica_raftstorage.go:810 [s4,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:49.513660 70454 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n4,s4):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:49.533191 70454 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=6dc746ac] proposing ADD_REPLICA((n4,s4):2): updated=[(n1,s1):1 (n4,s4):2] next=3
[08:42:54][Step 2/2] I181018 08:38:49.539391 70454 storage/queue.go:876 [replicate] purgatory is now empty
[08:42:54][Step 2/2] I181018 08:38:49.547341 70470 storage/store_snapshot.go:621 [replicate,s1,r1/1:/M{in-ax}] sending preemptive snapshot 21da97e9 at applied index 21
[08:42:54][Step 2/2] I181018 08:38:49.548711 70470 storage/store_snapshot.go:664 [replicate,s1,r1/1:/M{in-ax}] streamed snapshot to (n5,s5):?: kv pairs: 55, log entries: 11, rate-limit: 8.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:49.550800 70500 storage/replica_raftstorage.go:804 [s5,r1/?:{-}] applying preemptive snapshot at index 21 (id=21da97e9, encoded size=9916, 1 rocksdb batches, 11 log entries)
[08:42:54][Step 2/2] I181018 08:38:49.558376 70500 storage/replica_raftstorage.go:810 [s5,r1/?:/M{in-ax}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=4ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:49.563247 70470 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n5,s5):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n4,s4):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:49.593772 70470 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=533eeb3c] proposing ADD_REPLICA((n5,s5):3): updated=[(n1,s1):1 (n4,s4):2 (n5,s5):3] next=4
[08:42:54][Step 2/2] I181018 08:38:49.610284 70453 storage/store_snapshot.go:621 [replicate,s1,r1/1:/M{in-ax}] sending preemptive snapshot 3e54d662 at applied index 24
[08:42:54][Step 2/2] I181018 08:38:49.611988 70453 storage/store_snapshot.go:664 [replicate,s1,r1/1:/M{in-ax}] streamed snapshot to (n6,s6):?: kv pairs: 59, log entries: 14, rate-limit: 8.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:49.613639 70516 storage/replica_raftstorage.go:804 [s6,r1/?:{-}] applying preemptive snapshot at index 24 (id=3e54d662, encoded size=11228, 1 rocksdb batches, 14 log entries)
[08:42:54][Step 2/2] I181018 08:38:49.620299 70516 storage/replica_raftstorage.go:810 [s6,r1/?:/M{in-ax}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=4ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:49.626367 70453 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n6,s6):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n4,s4):2, (n5,s5):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:49.657430 70453 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=23729198] proposing ADD_REPLICA((n6,s6):4): updated=[(n1,s1):1 (n4,s4):2 (n5,s5):3 (n6,s6):4] next=5
[08:42:54][Step 2/2] I181018 08:38:49.686880 70453 storage/store_snapshot.go:621 [replicate,s1,r1/1:/M{in-ax}] sending preemptive snapshot a5c29de6 at applied index 27
[08:42:54][Step 2/2] I181018 08:38:49.689265 70453 storage/store_snapshot.go:664 [replicate,s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 63, log entries: 17, rate-limit: 8.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:49.690886 70518 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 27 (id=a5c29de6, encoded size=12515, 1 rocksdb batches, 17 log entries)
[08:42:54][Step 2/2] I181018 08:38:49.703794 70518 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 13ms [clear=0ms batch=0ms entries=9ms commit=3ms]
[08:42:54][Step 2/2] I181018 08:38:49.714206 70453 storage/replica_command.go:816 [replicate,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):5): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n4,s4):2, (n5,s5):3, (n6,s6):4, next=5, gen=0]
[08:42:54][Step 2/2] I181018 08:38:49.746668 70453 storage/replica.go:3884 [replicate,s1,r1/1:/M{in-ax},txn=54b951c1] proposing ADD_REPLICA((n3,s3):5): updated=[(n1,s1):1 (n4,s4):2 (n5,s5):3 (n6,s6):4 (n3,s3):5] next=6
[08:42:54][Step 2/2] W181018 08:38:49.971301 70639 internal/client/txn.go:532 [hb] failure aborting transaction: node unavailable; try another peer; abort caused by: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:49.971818 70639 storage/node_liveness.go:454 [hb] failed node liveness heartbeat: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:49.992275 70258 storage/raft_transport.go:282 unable to accept Raft message from (n4,s4):?: no handler registered for (n1,s1):?
[08:42:54][Step 2/2] W181018 08:38:50.004555 70488 storage/store.go:3662 [s4] raft error: node 1 claims to not contain store 1 for replica (n1,s1):?: store 1 was not found
[08:42:54][Step 2/2] W181018 08:38:50.005012 70256 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: store 1 was not found:
[08:42:54][Step 2/2] W181018 08:38:50.030878 70335 storage/store.go:1490 [s5,r1/3:/M{in-ax}] could not gossip first range descriptor: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:50.067550 70486 storage/raft_transport.go:584 while processing outgoing Raft queue to node 4: EOF:
[08:42:54][Step 2/2] --- PASS: TestStoreRangeRemoveDead (1.05s)
[08:42:54][Step 2/2] === RUN TestReplicateRogueRemovedNode
[08:42:54][Step 2/2] I181018 08:38:50.141777 70457 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41295" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:50.236557 70457 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:50.237663 70457 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:34709" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:50.266947 70888 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:41295
[08:42:54][Step 2/2] W181018 08:38:50.369020 70457 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:50.370213 70457 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:33417" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:50.372518 70960 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:41295
[08:42:54][Step 2/2] I181018 08:38:50.455659 70457 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 8435e006 at applied index 16
[08:42:54][Step 2/2] I181018 08:38:50.457167 70457 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:50.458642 70986 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=8435e006, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:50.461692 70986 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:50.464503 70457 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:50.472363 70457 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=c758bda3] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:50.487422 70457 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 67983b75 at applied index 18
[08:42:54][Step 2/2] I181018 08:38:50.490061 70457 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:50.494568 70906 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=67983b75, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:50.498641 70906 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:50.505003 70457 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:50.525692 70457 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=eda03905] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:50.699123 70457 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:50.716053 70457 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=50bb2f7f] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2] next=4
[08:42:54][Step 2/2] I181018 08:38:50.730663 70457 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:50.743305 70457 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=773f6e8b] proposing REMOVE_REPLICA((n2,s2):2): updated=[(n1,s1):1] next=4
[08:42:54][Step 2/2] I181018 08:38:50.757675 70457 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:54][Step 2/2] I181018 08:38:50.766727 70457 storage/store.go:2580 [replicaGC,s2,r1/2:/M{in-ax}] removing replica r1/2
[08:42:54][Step 2/2] I181018 08:38:50.768647 70457 storage/replica.go:863 [replicaGC,s2,r1/2:/M{in-ax}] removed 50 (44+6) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] E181018 08:38:50.770599 70908 storage/store.go:3657 [s1,r1/1:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:54][Step 2/2] E181018 08:38:50.804204 70908 storage/store.go:3657 [s3,r1/3:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:54][Step 2/2] E181018 08:38:50.804644 71009 storage/store.go:3638 [s3,r1/3:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:54][Step 2/2] W181018 08:38:50.806605 71100 storage/replica.go:6500 [s3,r1/3:/M{in-ax}] could not acquire lease for range gossip: r1 was not found on s3
[08:42:54][Step 2/2] W181018 08:38:50.806878 71100 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip first range descriptor: r1 was not found on s3
[08:42:54][Step 2/2] W181018 08:38:50.806966 71101 storage/replica.go:6500 [s3,r1/3:/M{in-ax}] could not acquire lease for range gossip: r1 was not found on s3
[08:42:54][Step 2/2] W181018 08:38:50.807396 71101 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip system config: r1 was not found on s3
[08:42:54][Step 2/2] W181018 08:38:50.807080 70481 storage/replica.go:6500 [s3,r1/3:/M{in-ax}] could not acquire lease for range gossip: r1 was not found on s3
[08:42:54][Step 2/2] W181018 08:38:50.807783 70481 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip node liveness: r1 was not found on s3
[08:42:54][Step 2/2] W181018 08:38:50.858357 70481 storage/replica.go:6500 [s3,r1/3:/M{in-ax}] could not acquire lease for range gossip: r1 was not found on s3
[08:42:54][Step 2/2] W181018 08:38:50.858727 70481 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip node liveness: r1 was not found on s3
[08:42:54][Step 2/2] W181018 08:38:50.859505 71100 storage/replica.go:6500 [s3,r1/3:/M{in-ax}] could not acquire lease for range gossip: r1 was not found on s3
[08:42:54][Step 2/2] W181018 08:38:50.859712 71100 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip first range descriptor: r1 was not found on s3
[08:42:54][Step 2/2] W181018 08:38:50.862250 71101 storage/replica.go:6500 [s3,r1/3:/M{in-ax}] could not acquire lease for range gossip: r1 was not found on s3
[08:42:54][Step 2/2] W181018 08:38:50.862442 71101 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip system config: r1 was not found on s3
[08:42:54][Step 2/2] I181018 08:38:50.915278 70457 storage/client_test.go:1252 test clock advanced to: 864003.600000128,0
[08:42:54][Step 2/2] I181018 08:38:50.925117 70457 storage/store.go:2580 [replicaGC,s3,r1/3:/M{in-ax}] removing replica r1/3
[08:42:54][Step 2/2] I181018 08:38:50.926882 70457 storage/replica.go:863 [replicaGC,s3,r1/3:/M{in-ax}] removed 49 (44+5) keys in 1ms [clear=0ms commit=0ms]
[08:42:54][Step 2/2] --- PASS: TestReplicateRogueRemovedNode (0.89s)
[08:42:54][Step 2/2] === RUN TestReplicateRemovedNodeDisruptiveElection
[08:42:54][Step 2/2] I181018 08:38:51.031163 70764 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:45073" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:51.085315 70764 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:51.086258 70764 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:39921" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:51.088023 70874 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:45073
[08:42:54][Step 2/2] W181018 08:38:51.128568 70764 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:51.129437 70764 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:37359" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:51.131206 71327 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:45073
[08:42:54][Step 2/2] W181018 08:38:51.180821 70764 gossip/gossip.go:1496 [n4] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:51.182107 70764 gossip/gossip.go:393 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:44091" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:51.183957 71462 gossip/client.go:129 [n4] started gossip client to 127.0.0.1:45073
[08:42:54][Step 2/2] I181018 08:38:51.215241 70764 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot d96d0052 at applied index 17
[08:42:54][Step 2/2] I181018 08:38:51.216442 70764 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 50, log entries: 7, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:51.217905 71574 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 17 (id=d96d0052, encoded size=8514, 1 rocksdb batches, 7 log entries)
[08:42:54][Step 2/2] I181018 08:38:51.221027 71574 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:51.224338 70764 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:51.233060 70764 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=457452d3] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:51.244901 70764 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot a7570fde at applied index 19
[08:42:54][Step 2/2] I181018 08:38:51.246523 70764 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 53, log entries: 9, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:51.249353 71604 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 19 (id=a7570fde, encoded size=9456, 1 rocksdb batches, 9 log entries)
[08:42:54][Step 2/2] I181018 08:38:51.253123 71604 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:51.256752 70764 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:51.283377 70764 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=cce178b7] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:51.306478 70764 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 25fd5239 at applied index 21
[08:42:54][Step 2/2] I181018 08:38:51.308947 70764 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n4,s4):?: kv pairs: 56, log entries: 11, rate-limit: 2.0 MiB/sec, 12ms
[08:42:54][Step 2/2] I181018 08:38:51.313249 71577 storage/replica_raftstorage.go:804 [s4,r1/?:{-}] applying preemptive snapshot at index 21 (id=25fd5239, encoded size=10463, 1 rocksdb batches, 11 log entries)
[08:42:54][Step 2/2] I181018 08:38:51.321621 71577 storage/replica_raftstorage.go:810 [s4,r1/?:/M{in-ax}] applied preemptive snapshot in 8ms [clear=0ms batch=0ms entries=5ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:51.325187 70764 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n4,s4):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:51.339165 70764 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=023ce893] proposing ADD_REPLICA((n4,s4):4): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3 (n4,s4):4] next=5
[08:42:54][Step 2/2] I181018 08:38:51.713372 70764 storage/replica_command.go:816 [s2,r1/2:/M{in-ax}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, (n4,s4):4, next=5, gen=0]
[08:42:54][Step 2/2] I181018 08:38:51.771353 70764 storage/replica.go:3884 [s2,r1/2:/M{in-ax},txn=a229ac92] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n4,s4):4 (n2,s2):2 (n3,s3):3] next=5
[08:42:54][Step 2/2] I181018 08:38:51.784152 71617 storage/store.go:2580 [replicaGC,s1,r1/1:/M{in-ax}] removing replica r1/1
[08:42:54][Step 2/2] I181018 08:38:51.786034 71617 storage/replica.go:863 [replicaGC,s1,r1/1:/M{in-ax}] removed 49 (44+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:51.838060 71472 storage/store.go:3659 [s2,r1/2:/M{in-ax}] added to replica GC queue (contacted deleted peer)
[08:42:54][Step 2/2] I181018 08:38:51.843998 71638 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] storage.replicaGC: processing replica
[08:42:54][Step 2/2] E181018 08:38:51.844988 71598 storage/queue.go:791 [replicaGC,s2,r1/2:/M{in-ax}] node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:51.849517 71540 storage/store.go:1490 [s4,r1/4:/M{in-ax}] could not gossip first range descriptor: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:51.871909 71610 storage/raft_transport.go:282 unable to accept Raft message from (n4,s4):?: no handler registered for (n2,s2):?
[08:42:54][Step 2/2] W181018 08:38:51.874771 71608 storage/store.go:3662 [s4] raft error: node 2 claims to not contain store 2 for replica (n2,s2):?: store 2 was not found
[08:42:54][Step 2/2] W181018 08:38:51.875232 71606 storage/raft_transport.go:584 while processing outgoing Raft queue to node 2: store 2 was not found:
[08:42:54][Step 2/2] W181018 08:38:51.898463 71611 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: EOF:
[08:42:54][Step 2/2] --- PASS: TestReplicateRemovedNodeDisruptiveElection (0.93s)
[08:42:54][Step 2/2] === RUN TestReplicaTooOldGC
[08:42:54][Step 2/2] I181018 08:38:51.983361 71652 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:34885" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:52.055486 71652 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:52.058029 71628 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:34885
[08:42:54][Step 2/2] I181018 08:38:52.059567 71652 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:37035" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:52.128757 71652 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:52.129622 71652 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:40075" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:52.132956 71988 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:34885
[08:42:54][Step 2/2] W181018 08:38:52.185510 71652 gossip/gossip.go:1496 [n4] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:52.186406 71652 gossip/gossip.go:393 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:45907" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:52.188199 71777 gossip/client.go:129 [n4] started gossip client to 127.0.0.1:34885
[08:42:54][Step 2/2] I181018 08:38:52.222282 71652 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 8d9ba4a0 at applied index 17
[08:42:54][Step 2/2] I181018 08:38:52.224046 71652 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 50, log entries: 7, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:38:52.226231 72131 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 17 (id=8d9ba4a0, encoded size=8514, 1 rocksdb batches, 7 log entries)
[08:42:54][Step 2/2] I181018 08:38:52.230061 72131 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:52.234130 71652 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:52.243648 71652 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=88d984d1] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:52.265920 71652 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 87477023 at applied index 19
[08:42:54][Step 2/2] I181018 08:38:52.270778 71652 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 53, log entries: 9, rate-limit: 2.0 MiB/sec, 15ms
[08:42:54][Step 2/2] I181018 08:38:52.280876 72133 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 19 (id=87477023, encoded size=9456, 1 rocksdb batches, 9 log entries)
[08:42:54][Step 2/2] I181018 08:38:52.284892 72133 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:52.289174 71652 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:52.319838 71652 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b34ed78e] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:52.335649 71652 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot aea0f766 at applied index 21
[08:42:54][Step 2/2] I181018 08:38:52.337243 71652 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n4,s4):?: kv pairs: 56, log entries: 11, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:52.339838 72154 storage/replica_raftstorage.go:804 [s4,r1/?:{-}] applying preemptive snapshot at index 21 (id=aea0f766, encoded size=10463, 1 rocksdb batches, 11 log entries)
[08:42:54][Step 2/2] I181018 08:38:52.345886 72154 storage/replica_raftstorage.go:810 [s4,r1/?:/M{in-ax}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=4ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:52.350977 71652 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n4,s4):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:52.383352 71652 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=59206665] proposing ADD_REPLICA((n4,s4):4): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3 (n4,s4):4] next=5
[08:42:54][Step 2/2] I181018 08:38:52.530624 71652 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n4,s4):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, (n4,s4):4, next=5, gen=0]
[08:42:54][Step 2/2] I181018 08:38:52.564214 71652 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=103dced5] proposing REMOVE_REPLICA((n4,s4):4): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=5
[08:42:54][Step 2/2] I181018 08:38:53.392518 72029 storage/store.go:3640 [s4,r1/4:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:53.392801 72150 storage/store.go:3640 [s4,r1/4:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:53.392973 72121 storage/store.go:3640 [s4,r1/4:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:38:53.395496 72127 storage/store.go:2580 [replicaGC,s4,r1/4:/M{in-ax}] removing replica r1/4
[08:42:54][Step 2/2] I181018 08:38:53.397286 72127 storage/replica.go:863 [replicaGC,s4,r1/4:/M{in-ax}] removed 50 (45+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:53.575113 71724 gossip/gossip.go:1510 [n2] node has connected to cluster via gossip
[08:42:54][Step 2/2] W181018 08:38:53.675563 71878 storage/raft_transport.go:584 while processing outgoing Raft queue to node 4: EOF:
[08:42:54][Step 2/2] --- PASS: TestReplicaTooOldGC (1.78s)
[08:42:54][Step 2/2] === RUN TestReplicaLazyLoad
[08:42:54][Step 2/2] W181018 08:38:53.747359 72365 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip system config: periodic gossip is disabled
[08:42:54][Step 2/2] W181018 08:38:53.747501 72366 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip node liveness: periodic gossip is disabled
[08:42:54][Step 2/2] I181018 08:38:53.763830 72134 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:43007" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:53.803282 72134 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key /Table/50 [r2]
[08:42:54][Step 2/2] I181018 08:38:53.827834 72134 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] storage.replicate: processing replica
[08:42:54][Step 2/2] 1 [async] kv.TxnCoordSender: heartbeat loop
[08:42:54][Step 2/2] I181018 08:38:53.828507 72134 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] storage.replicate: processing replica
[08:42:54][Step 2/2] W181018 08:38:53.831735 72389 internal/client/txn.go:532 [hb] failure aborting transaction: node unavailable; try another peer; abort caused by: result is ambiguous (server shutdown)
[08:42:54][Step 2/2] I181018 08:38:53.831988 72389 storage/node_liveness.go:790 [hb] retrying liveness update after storage.errRetryLiveness: result is ambiguous (server shutdown)
[08:42:54][Step 2/2] W181018 08:38:53.832359 72389 storage/node_liveness.go:454 [hb] failed node liveness heartbeat: context canceled
[08:42:54][Step 2/2] I181018 08:38:53.832514 72129 storage/queue.go:876 [replicate] purgatory is now empty
[08:42:54][Step 2/2] W181018 08:38:53.866008 72470 storage/store.go:1490 [s1,r1/1:/{Min-Table/50}] could not gossip system config: periodic gossip is disabled
[08:42:54][Step 2/2] W181018 08:38:53.866022 72471 storage/store.go:1490 [s1,r1/1:/{Min-Table/50}] could not gossip node liveness: periodic gossip is disabled
[08:42:54][Step 2/2] --- PASS: TestReplicaLazyLoad (0.24s)
[08:42:54][Step 2/2] === RUN TestReplicateReAddAfterDown
[08:42:54][Step 2/2] I181018 08:38:54.025376 72288 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:40305" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:54.077189 72288 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:54.078444 72288 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:32901" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:54.085360 72726 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:40305
[08:42:54][Step 2/2] W181018 08:38:54.164339 72288 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:54.165618 72288 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:42061" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:54.167684 72487 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:40305
[08:42:54][Step 2/2] I181018 08:38:54.194247 72288 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot c3d80548 at applied index 16
[08:42:54][Step 2/2] I181018 08:38:54.195845 72288 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:54.197655 72731 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=c3d80548, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:54.200299 72731 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:54.203655 72288 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:54.213797 72288 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=405c85a0] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:54.237293 72288 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 60c040a7 at applied index 18
[08:42:54][Step 2/2] I181018 08:38:54.238829 72288 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 15ms
[08:42:54][Step 2/2] I181018 08:38:54.241353 72736 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=60c040a7, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:54.244996 72736 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:54.248831 72288 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:54.268495 72288 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=2a5ed350] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:54.438411 72288 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:54.456312 72288 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=56d5ad21] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2] next=4
[08:42:54][Step 2/2] I181018 08:38:54.508719 72288 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 9d642cc7 at applied index 24
[08:42:54][Step 2/2] I181018 08:38:54.510903 72288 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 60, log entries: 14, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:38:54.512532 72963 storage/replica_raftstorage.go:804 [s3,r1/3:/M{in-ax}] applying preemptive snapshot at index 24 (id=9d642cc7, encoded size=11477, 1 rocksdb batches, 14 log entries)
[08:42:54][Step 2/2] I181018 08:38:54.519841 72963 storage/replica_raftstorage.go:810 [s3,r1/3:/M{in-ax}] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=5ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:38:54.526094 72288 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):4): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:38:54.548946 72288 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=71d0900c] proposing ADD_REPLICA((n3,s3):4): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):4] next=5
[08:42:54][Step 2/2] W181018 08:38:54.622072 72491 storage/raft_transport.go:584 while processing outgoing Raft queue to node 2: EOF:
[08:42:54][Step 2/2] --- PASS: TestReplicateReAddAfterDown (0.76s)
[08:42:54][Step 2/2] === RUN TestLeaseHolderRemoveSelf
[08:42:54][Step 2/2] I181018 08:38:54.809552 72741 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:37187" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:54.883983 72741 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:54.886735 72752 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:37187
[08:42:54][Step 2/2] I181018 08:38:54.888275 72741 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45575" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:54.918916 72741 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 6fe4a681 at applied index 15
[08:42:54][Step 2/2] I181018 08:38:54.925917 72741 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 48, log entries: 5, rate-limit: 2.0 MiB/sec, 11ms
[08:42:54][Step 2/2] I181018 08:38:54.928282 73192 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 15 (id=6fe4a681, encoded size=8165, 1 rocksdb batches, 5 log entries)
[08:42:54][Step 2/2] I181018 08:38:54.934773 73192 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=4ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:54.940370 72741 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:54.952278 72741 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=c513a76d] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:55.108819 72741 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:55.123963 72741 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=4f041a26] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
[08:42:54][Step 2/2] E181018 08:38:55.124354 72741 storage/replica.go:3893 [s1,r1/1:/M{in-ax},txn=4f041a26] received invalid ChangeReplicasTrigger REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3 to remove self (leaseholder)
[08:42:54][Step 2/2] --- PASS: TestLeaseHolderRemoveSelf (0.49s)
[08:42:54][Step 2/2] === RUN TestRemovedReplicaError
[08:42:54][Step 2/2] I181018 08:38:55.293631 73099 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:34281" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:55.356209 73099 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:55.357066 73099 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:35159" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:55.359168 73444 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:34281
[08:42:54][Step 2/2] I181018 08:38:55.380458 73099 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 4bd1cd97 at applied index 15
[08:42:54][Step 2/2] I181018 08:38:55.382035 73099 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 48, log entries: 5, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:55.384015 73226 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 15 (id=4bd1cd97, encoded size=8165, 1 rocksdb batches, 5 log entries)
[08:42:54][Step 2/2] I181018 08:38:55.387085 73226 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:55.390664 73099 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:55.402935 73099 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=0a464e34] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:55.720249 73099 storage/replica_command.go:816 [s2,r1/2:/M{in-ax}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:55.748437 73099 storage/replica.go:3884 [s2,r1/2:/M{in-ax},txn=fe399186] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
[08:42:54][Step 2/2] E181018 08:38:55.756709 73101 storage/replica_proposal.go:721 [s1,r1/1:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:54][Step 2/2] E181018 08:38:55.758611 73463 storage/store.go:3638 [s1,r1/1:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:54][Step 2/2] --- PASS: TestRemovedReplicaError (0.62s)
[08:42:54][Step 2/2] === RUN TestRemoveRangeWithoutGC
[08:42:54][Step 2/2] I181018 08:38:55.845552 72945 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:43575" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:55.884835 72945 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:55.885824 72945 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:46345" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:55.887491 73472 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:43575
[08:42:54][Step 2/2] I181018 08:38:55.908667 72945 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot f6e9b04b at applied index 15
[08:42:54][Step 2/2] I181018 08:38:55.910236 72945 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 48, log entries: 5, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:55.912410 73329 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 15 (id=f6e9b04b, encoded size=8165, 1 rocksdb batches, 5 log entries)
[08:42:54][Step 2/2] I181018 08:38:55.915308 73329 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:55.918182 72945 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:55.925916 72945 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=6b20b9cb] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:56.109541 72945 storage/replica_command.go:816 [s2,r1/2:/M{in-ax}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:56.139456 72945 storage/replica.go:3884 [s2,r1/2:/M{in-ax},txn=3a53dda7] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n2,s2):2] next=3
[08:42:54][Step 2/2] E181018 08:38:56.146281 73502 storage/replica_proposal.go:721 [s1,r1/1:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:54][Step 2/2] E181018 08:38:56.148543 73232 storage/store.go:3638 [s1,r1/1:/M{in-ax}] unable to add to replica GC queue: queue stopped
[08:42:54][Step 2/2] E181018 08:38:56.180483 72945 storage/store.go:1313 [s1] [n1,s1,r1/?:/M{in-ax}]: unable to add replica to GC queue: queue disabled
[08:42:54][Step 2/2] I181018 08:38:56.194894 72945 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:54][Step 2/2] I181018 08:38:56.203462 72945 storage/store.go:2580 [replicaGC,s1,r1/?:/M{in-ax}] removing replica r1/0
[08:42:54][Step 2/2] I181018 08:38:56.204835 72945 storage/replica.go:863 [replicaGC,s1,r1/?:/M{in-ax}] removed 47 (42+5) keys in 1ms [clear=0ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:56.207760 73736 storage/queue.go:876 [replicate] purgatory is now empty
[08:42:54][Step 2/2] --- PASS: TestRemoveRangeWithoutGC (0.43s)
[08:42:54][Step 2/2] === RUN TestTransferRaftLeadership
[08:42:54][Step 2/2] I181018 08:38:56.274627 73750 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:40407" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:56.319261 73750 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:56.320477 73750 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:41079" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:56.324815 73851 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:40407
[08:42:54][Step 2/2] W181018 08:38:56.362869 73750 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:56.363738 73750 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:45413" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:56.365554 74025 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:40407
[08:42:54][Step 2/2] I181018 08:38:56.386922 73750 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r2]
[08:42:54][Step 2/2] I181018 08:38:56.413768 73750 storage/store_snapshot.go:621 [s1,r2/1:{a-/Max}] sending preemptive snapshot 05973c5d at applied index 11
[08:42:54][Step 2/2] I181018 08:38:56.414885 73750 storage/store_snapshot.go:664 [s1,r2/1:{a-/Max}] streamed snapshot to (n2,s2):?: kv pairs: 42, log entries: 1, rate-limit: 2.0 MiB/sec, 3ms
[08:42:54][Step 2/2] I181018 08:38:56.416381 73842 storage/replica_raftstorage.go:804 [s2,r2/?:{-}] applying preemptive snapshot at index 11 (id=05973c5d, encoded size=7467, 1 rocksdb batches, 1 log entries)
[08:42:54][Step 2/2] I181018 08:38:56.417888 73842 storage/replica_raftstorage.go:810 [s2,r2/?:{a-/Max}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:56.421091 73750 storage/replica_command.go:816 [s1,r2/1:{a-/Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:{a-/Max} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:56.434061 73750 storage/replica.go:3884 [s1,r2/1:{a-/Max},txn=67798bb7] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:56.444299 73750 storage/store_snapshot.go:621 [s1,r2/1:{a-/Max}] sending preemptive snapshot 283bb4aa at applied index 14
[08:42:54][Step 2/2] I181018 08:38:56.446460 73750 storage/store_snapshot.go:664 [s1,r2/1:{a-/Max}] streamed snapshot to (n3,s3):?: kv pairs: 44, log entries: 4, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:56.448068 74155 storage/replica_raftstorage.go:804 [s3,r2/?:{-}] applying preemptive snapshot at index 14 (id=283bb4aa, encoded size=8360, 1 rocksdb batches, 4 log entries)
[08:42:54][Step 2/2] I181018 08:38:56.450361 74155 storage/replica_raftstorage.go:810 [s3,r2/?:{a-/Max}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:56.454015 73750 storage/replica_command.go:816 [s1,r2/1:{a-/Max}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r2:{a-/Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:56.472693 73750 storage/replica.go:3884 [s1,r2/1:{a-/Max},txn=51bedd35] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:38:56.504220 73750 storage/client_test.go:1252 test clock advanced to: 60000.000000125,0
[08:42:54][Step 2/2] I181018 08:38:56.519006 73945 storage/replica_proposal.go:212 [s2,r2/2:{a-/Max}] new range lease repl=(n2,s2):2 seq=2 start=30000.000000123,6 epo=1 pro=60000.000000125,5 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=30000.000000123,5 pro=0.000000123,6
[08:42:54][Step 2/2] W181018 08:38:56.600387 74156 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: EOF:
[08:42:54][Step 2/2] --- PASS: TestTransferRaftLeadership (0.38s)
[08:42:54][Step 2/2] === RUN TestFailedPreemptiveSnapshot
[08:42:54][Step 2/2] I181018 08:38:56.685811 73218 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:39005" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:56.742805 73218 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:56.743986 73218 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:34583" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:56.747565 74413 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:39005
[08:42:54][Step 2/2] I181018 08:38:56.767820 73218 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot a4371987 at applied index 15
[08:42:54][Step 2/2] I181018 08:38:56.769599 73218 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 48, log entries: 5, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:56.771472 74416 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 15 (id=a4371987, encoded size=8165, 1 rocksdb batches, 5 log entries)
[08:42:54][Step 2/2] I181018 08:38:56.774642 74416 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:56.778410 73218 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:56.788037 73218 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=937a8bc9] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:57.065571 73218 rpc/nodedialer/nodedialer.go:91 unable to connect to n3: unknown peer 3
[08:42:54][Step 2/2] --- PASS: TestFailedPreemptiveSnapshot (0.48s)
[08:42:54][Step 2/2] === RUN TestRaftBlockedReplica
[08:42:54][Step 2/2] I181018 08:38:57.157126 74426 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:38757" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:57.198863 74426 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:57.199840 74426 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:33749" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:57.201467 74178 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:38757
[08:42:54][Step 2/2] W181018 08:38:57.249563 74426 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:57.250752 74426 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:44883" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:57.254634 74788 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:38757
[08:42:54][Step 2/2] I181018 08:38:57.278107 74426 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:54][Step 2/2] I181018 08:38:57.303995 74426 storage/store_snapshot.go:621 [s1,r1/1:{/Min-b}] sending preemptive snapshot e095b21b at applied index 19
[08:42:54][Step 2/2] I181018 08:38:57.305411 74426 storage/store_snapshot.go:664 [s1,r1/1:{/Min-b}] streamed snapshot to (n2,s2):?: kv pairs: 18, log entries: 9, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:57.307664 74791 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 19 (id=e095b21b, encoded size=2864, 1 rocksdb batches, 9 log entries)
[08:42:54][Step 2/2] I181018 08:38:57.311488 74791 storage/replica_raftstorage.go:810 [s2,r1/?:{/Min-b}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:57.315482 74426 storage/replica_command.go:816 [s1,r1/1:{/Min-b}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:{/Min-b} [(n1,s1):1, next=2, gen=1]
[08:42:54][Step 2/2] I181018 08:38:57.324397 74426 storage/replica.go:3884 [s1,r1/1:{/Min-b},txn=d855c0b2] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:57.336684 74426 storage/store_snapshot.go:621 [s1,r1/1:{/Min-b}] sending preemptive snapshot 43da522d at applied index 21
[08:42:54][Step 2/2] I181018 08:38:57.338124 74426 storage/store_snapshot.go:664 [s1,r1/1:{/Min-b}] streamed snapshot to (n3,s3):?: kv pairs: 21, log entries: 11, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:57.340316 74683 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 21 (id=43da522d, encoded size=3810, 1 rocksdb batches, 11 log entries)
[08:42:54][Step 2/2] I181018 08:38:57.344754 74683 storage/replica_raftstorage.go:810 [s3,r1/?:{/Min-b}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:57.353690 74426 storage/replica_command.go:816 [s1,r1/1:{/Min-b}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:{/Min-b} [(n1,s1):1, (n2,s2):2, next=3, gen=1]
[08:42:54][Step 2/2] I181018 08:38:57.375162 74426 storage/replica.go:3884 [s1,r1/1:{/Min-b},txn=e130013a] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] --- PASS: TestRaftBlockedReplica (0.90s)
[08:42:54][Step 2/2] === RUN TestRangeQuiescence
[08:42:54][Step 2/2] W181018 08:38:58.037931 74922 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip system config: periodic gossip is disabled
[08:42:54][Step 2/2] W181018 08:38:58.038052 74923 storage/store.go:1490 [s1,r1/1:/M{in-ax}] could not gossip node liveness: periodic gossip is disabled
[08:42:54][Step 2/2] I181018 08:38:58.047081 74817 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:38415" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:58.117194 74817 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:58.118381 74817 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:37945" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:58.120391 75075 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:38415
[08:42:54][Step 2/2] W181018 08:38:58.170115 74817 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:58.171148 74817 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:44793" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:58.172896 75092 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:38415
[08:42:54][Step 2/2] I181018 08:38:58.237309 74817 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 17aea938 at applied index 16
[08:42:54][Step 2/2] I181018 08:38:58.238792 74817 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 16ms
[08:42:54][Step 2/2] I181018 08:38:58.240282 75097 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=17aea938, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:58.243026 75097 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:58.248075 74817 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:58.257857 74817 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=4ab767fa] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:58.269258 74817 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 703a9d4f at applied index 18
[08:42:54][Step 2/2] I181018 08:38:58.270749 74817 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:58.273004 75069 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=703a9d4f, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:58.277171 75069 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:58.282955 74817 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:58.306348 74817 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=79102391] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] --- PASS: TestRangeQuiescence (1.07s)
[08:42:54][Step 2/2] === RUN TestInitRaftGroupOnRequest
[08:42:54][Step 2/2] I181018 08:38:59.125699 75194 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:37011" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:59.181304 75194 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:59.182498 75194 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:35427" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:59.184327 75439 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:37011
[08:42:54][Step 2/2] I181018 08:38:59.207255 75194 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key /Table/50 [r2]
[08:42:54][Step 2/2] I181018 08:38:59.239450 75194 storage/store_snapshot.go:621 [s1,r2/1:/{Table/50-Max}] sending preemptive snapshot 90ad313d at applied index 11
[08:42:54][Step 2/2] I181018 08:38:59.240687 75194 storage/store_snapshot.go:664 [s1,r2/1:/{Table/50-Max}] streamed snapshot to (n2,s2):?: kv pairs: 6, log entries: 1, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:59.242456 75471 storage/replica_raftstorage.go:804 [s2,r2/?:{-}] applying preemptive snapshot at index 11 (id=90ad313d, encoded size=284, 1 rocksdb batches, 1 log entries)
[08:42:54][Step 2/2] I181018 08:38:59.244204 75471 storage/replica_raftstorage.go:810 [s2,r2/?:/{Table/50-Max}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:59.247519 75194 storage/replica_command.go:816 [s1,r2/1:/{Table/50-Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/{Table/50-Max} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:59.264238 75194 storage/replica.go:3884 [s1,r2/1:/{Table/50-Max},txn=09325be1] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] W181018 08:38:59.417281 75572 storage/node_liveness.go:454 [hb] failed node liveness heartbeat: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:59.418488 75667 internal/client/txn.go:532 [hb] failure aborting transaction: node unavailable; try another peer; abort caused by: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:38:59.418924 75667 storage/node_liveness.go:454 [hb] failed node liveness heartbeat: node unavailable; try another peer
[08:42:54][Step 2/2] I181018 08:38:59.420141 75474 internal/client/txn.go:637 async rollback failed: node unavailable; try another peer
[08:42:54][Step 2/2] --- PASS: TestInitRaftGroupOnRequest (0.44s)
[08:42:54][Step 2/2] === RUN TestFailedConfChange
[08:42:54][Step 2/2] I181018 08:38:59.595256 75663 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:33799" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:38:59.645122 75663 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:59.646470 75663 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:35149" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:59.648192 75765 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:33799
[08:42:54][Step 2/2] W181018 08:38:59.714099 75663 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:38:59.715284 75663 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:35581" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:38:59.717031 75776 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:33799
[08:42:54][Step 2/2] I181018 08:38:59.740138 75663 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 9311bba8 at applied index 16
[08:42:54][Step 2/2] I181018 08:38:59.742090 75663 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:59.744317 75969 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=9311bba8, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:38:59.747931 75969 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:59.751057 75663 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:38:59.758771 75663 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=d3e1c5e8] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:38:59.909761 75663 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 1c1442f7 at applied index 18
[08:42:54][Step 2/2] I181018 08:38:59.911431 75663 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:38:59.914894 75679 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=1c1442f7, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:38:59.919209 75679 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=3ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:38:59.923654 75663 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:38:59.937256 75663 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=536194d3] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:39:00.001984 76026 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 storage.replicate: purgatory processing replica
[08:42:54][Step 2/2] --- PASS: TestFailedConfChange (0.54s)
[08:42:54][Step 2/2] === RUN TestStoreRangeRemovalCompactionSuggestion
[08:42:54][Step 2/2] I181018 08:39:00.100527 75484 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:40859" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:39:00.140730 75484 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:00.141555 75484 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:35913" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:00.143051 76009 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:40859
[08:42:54][Step 2/2] W181018 08:39:00.187005 75484 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:00.188231 75484 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:46517" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:00.190685 76270 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:40859
[08:42:54][Step 2/2] I181018 08:39:00.220331 75484 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot fb3da2b7 at applied index 16
[08:42:54][Step 2/2] I181018 08:39:00.221682 75484 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:39:00.223520 76375 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=fb3da2b7, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:39:00.226866 76375 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:00.229543 75484 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:39:00.237718 75484 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=b51e4e5a] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:39:00.248483 75484 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 8996a3ca at applied index 18
[08:42:54][Step 2/2] I181018 08:39:00.250365 75484 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:39:00.251910 76150 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=8996a3ca, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:39:00.255289 76150 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:00.258357 75484 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:39:00.279366 75484 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=f975df5c] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:39:00.430759 75484 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:39:00.452707 75484 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=41bd0d4b] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2] next=4
[08:42:54][Step 2/2] I181018 08:39:00.463778 76152 storage/store.go:3640 [s3,r1/3:/M{in-ax}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:39:00.465268 76283 storage/store.go:2580 [replicaGC,s3,r1/3:/M{in-ax}] removing replica r1/3
[08:42:54][Step 2/2] I181018 08:39:00.466829 76283 storage/replica.go:863 [replicaGC,s3,r1/3:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=0ms commit=0ms]
[08:42:54][Step 2/2] --- PASS: TestStoreRangeRemovalCompactionSuggestion (0.47s)
[08:42:54][Step 2/2] === RUN TestStoreRangeWaitForApplication
[08:42:54][Step 2/2] I181018 08:39:00.592761 76284 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:46223" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:39:00.637711 76284 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:00.638641 76284 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45201" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:00.640399 76525 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:46223
[08:42:54][Step 2/2] W181018 08:39:00.749017 76284 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:00.750088 76284 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:38293" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:00.751824 76393 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:46223
[08:42:54][Step 2/2] I181018 08:39:00.777919 76284 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r2]
[08:42:54][Step 2/2] I181018 08:39:00.810901 76284 storage/store_snapshot.go:621 [s1,r2/1:{a-/Max}] sending preemptive snapshot c2a50829 at applied index 11
[08:42:54][Step 2/2] I181018 08:39:00.813193 76284 storage/store_snapshot.go:664 [s1,r2/1:{a-/Max}] streamed snapshot to (n2,s2):?: kv pairs: 42, log entries: 1, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:39:00.815168 76751 storage/replica_raftstorage.go:804 [s2,r2/?:{-}] applying preemptive snapshot at index 11 (id=c2a50829, encoded size=7463, 1 rocksdb batches, 1 log entries)
[08:42:54][Step 2/2] I181018 08:39:00.817217 76751 storage/replica_raftstorage.go:810 [s2,r2/?:{a-/Max}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:00.820940 76284 storage/replica_command.go:816 [s1,r2/1:{a-/Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:{a-/Max} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:39:00.834615 76284 storage/replica.go:3884 [s1,r2/1:{a-/Max},txn=ee509633] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:39:00.848031 76284 storage/store_snapshot.go:621 [s1,r2/1:{a-/Max}] sending preemptive snapshot fcb60ba7 at applied index 14
[08:42:54][Step 2/2] I181018 08:39:00.849630 76284 storage/store_snapshot.go:664 [s1,r2/1:{a-/Max}] streamed snapshot to (n3,s3):?: kv pairs: 44, log entries: 4, rate-limit: 2.0 MiB/sec, 6ms
[08:42:54][Step 2/2] I181018 08:39:00.854757 76757 storage/replica_raftstorage.go:804 [s3,r2/?:{-}] applying preemptive snapshot at index 14 (id=fcb60ba7, encoded size=8356, 1 rocksdb batches, 4 log entries)
[08:42:54][Step 2/2] I181018 08:39:00.857765 76757 storage/replica_raftstorage.go:810 [s3,r2/?:{a-/Max}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:00.861193 76284 storage/replica_command.go:816 [s1,r2/1:{a-/Max}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r2:{a-/Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:39:00.879486 76284 storage/replica.go:3884 [s1,r2/1:{a-/Max},txn=c439d420] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:39:01.093461 76284 storage/replica_command.go:816 [s1,r2/1:{a-/Max}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r2:{a-/Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:39:01.111301 76284 storage/replica.go:3884 [s1,r2/1:{a-/Max},txn=0f05c1df] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2] next=4
[08:42:54][Step 2/2] E181018 08:39:01.122768 76774 storage/store.go:3638 [s3,r2/3:{a-/Max}] unable to add to replica GC queue: queue disabled
[08:42:54][Step 2/2] E181018 08:39:01.127924 76672 storage/replica_proposal.go:721 [s3,r2/3:{a-/Max}] unable to add to replica GC queue: queue disabled
[08:42:54][Step 2/2] I181018 08:39:01.129850 76284 storage/store.go:2580 [s3,r2/3:{a-/Max}] removing replica r2/3
[08:42:54][Step 2/2] I181018 08:39:01.131150 76284 storage/replica.go:863 [s3,r2/3:{a-/Max}] removed 43 (37+6) keys in 1ms [clear=0ms commit=0ms]
[08:42:54][Step 2/2] --- PASS: TestStoreRangeWaitForApplication (0.76s)
[08:42:54][Step 2/2] === RUN TestReplicaGCQueueDropReplicaDirect
[08:42:54][Step 2/2] I181018 08:39:01.345310 76827 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:45991" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:39:01.416229 76827 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:01.416927 76827 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:44849" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:01.420622 77043 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:45991
[08:42:54][Step 2/2] W181018 08:39:01.486471 76827 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:01.487630 76827 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:35887" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:01.489441 77145 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:45991
[08:42:54][Step 2/2] I181018 08:39:01.510675 76827 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 12d3c14d at applied index 16
[08:42:54][Step 2/2] I181018 08:39:01.512364 76827 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:39:01.513996 77171 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=12d3c14d, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:39:01.517130 77171 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:01.527516 76827 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:39:01.536172 76827 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=6543a700] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:39:01.549189 76827 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 53df081d at applied index 18
[08:42:54][Step 2/2] I181018 08:39:01.550470 76827 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:39:01.552378 77187 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=53df081d, encoded size=9280, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:39:01.556097 77187 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:01.559529 76827 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:39:01.581762 76827 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=7e268dcf] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:39:01.865602 76827 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:39:01.885520 76827 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=8675e81c] proposing REMOVE_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:39:01.904763 77036 storage/store.go:2580 [replicaGC,s2,r1/2:/M{in-ax}] removing replica r1/2
[08:42:54][Step 2/2] I181018 08:39:01.906561 77036 storage/replica.go:863 [replicaGC,s2,r1/2:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] W181018 08:39:01.925955 77127 storage/store.go:1490 [s3,r1/3:/M{in-ax}] could not gossip first range descriptor: node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:39:01.926115 77167 storage/raft_transport.go:282 unable to accept Raft message from (n3,s3):?: no handler registered for (n1,s1):?
[08:42:54][Step 2/2] W181018 08:39:01.927294 77034 storage/store.go:3662 [s3] raft error: node 1 claims to not contain store 1 for replica (n1,s1):?: store 1 was not found
[08:42:54][Step 2/2] W181018 08:39:01.927624 77190 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: store 1 was not found:
[08:42:54][Step 2/2] W181018 08:39:01.947033 77169 storage/raft_transport.go:584 while processing outgoing Raft queue to node 3: EOF:
[08:42:54][Step 2/2] --- PASS: TestReplicaGCQueueDropReplicaDirect (0.67s)
[08:42:54][Step 2/2] === RUN TestReplicaGCQueueDropReplicaGCOnScan
[08:42:54][Step 2/2] I181018 08:39:02.022784 76955 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:36421" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:39:02.066207 76955 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:02.067155 76955 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:39631" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:02.068872 77303 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:36421
[08:42:54][Step 2/2] W181018 08:39:02.106230 76955 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:02.107145 76955 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:42963" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:02.109545 77291 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:36421
[08:42:54][Step 2/2] I181018 08:39:02.132757 76955 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 0552a40c at applied index 16
[08:42:54][Step 2/2] I181018 08:39:02.133990 76955 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 3ms
[08:42:54][Step 2/2] I181018 08:39:02.135866 77435 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=0552a40c, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:39:02.140190 77435 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:39:02.144869 76955 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:39:02.152193 76955 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=29869864] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:39:02.163041 76955 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 4855fac9 at applied index 18
[08:42:54][Step 2/2] I181018 08:39:02.164726 76955 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:39:02.166384 77438 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=4855fac9, encoded size=9276, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:39:02.169923 77438 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:02.172961 76955 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:39:02.191257 76955 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=3ebfcc5b] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:39:02.347428 76955 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (REMOVE_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:39:02.365411 76955 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=adf018e6] proposing REMOVE_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n3,s3):3] next=4
[08:42:54][Step 2/2] E181018 08:39:02.374200 77346 storage/replica_proposal.go:721 [s2,r1/2:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:54][Step 2/2] E181018 08:39:02.377403 77441 storage/store.go:3638 [s2,r1/2:/M{in-ax}] unable to add to replica GC queue: queue disabled
[08:42:54][Step 2/2] I181018 08:39:02.385077 76955 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:54][Step 2/2] I181018 08:39:02.399642 76955 storage/store.go:2580 [replicaGC,s2,r1/2:/M{in-ax}] removing replica r1/2
[08:42:54][Step 2/2] I181018 08:39:02.401285 76955 storage/replica.go:863 [replicaGC,s2,r1/2:/M{in-ax}] removed 48 (43+5) keys in 1ms [clear=0ms commit=0ms]
[08:42:54][Step 2/2] --- PASS: TestReplicaGCQueueDropReplicaGCOnScan (0.48s)
[08:42:54][Step 2/2] === RUN TestRangeCommandClockUpdate
[08:42:54][Step 2/2] I181018 08:39:02.506164 77548 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:33871" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:39:02.561677 77548 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:02.562677 77548 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:42881" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:02.565728 77790 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:33871
[08:42:54][Step 2/2] W181018 08:39:02.606358 77548 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:02.607236 77548 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:39519" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:02.608820 77792 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:33871
[08:42:54][Step 2/2] I181018 08:39:02.676385 77548 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot ff628f83 at applied index 16
[08:42:54][Step 2/2] I181018 08:39:02.677597 77548 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:39:02.679287 77706 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=ff628f83, encoded size=8338, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:39:02.681932 77706 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:02.685247 77548 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:39:02.694218 77548 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=e460e40a] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:39:02.708384 77548 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 9ec60ee6 at applied index 18
[08:42:54][Step 2/2] I181018 08:39:02.709888 77548 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:39:02.711895 77943 storage/replica_raftstorage.go:804 [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=9ec60ee6, encoded size=9267, 1 rocksdb batches, 8 log entries)
[08:42:54][Step 2/2] I181018 08:39:02.715960 77943 storage/replica_raftstorage.go:810 [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:02.719023 77548 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:39:02.737246 77548 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=1a0ebbea] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] W181018 08:39:02.823882 77817 util/hlc/hlc.go:312 remote wall time is too far ahead (500ms) to be trustworthy - updating anyway
[08:42:54][Step 2/2] W181018 08:39:02.825378 77717 util/hlc/hlc.go:312 remote wall time is too far ahead (500ms) to be trustworthy - updating anyway
[08:42:54][Step 2/2] W181018 08:39:02.828098 77818 util/hlc/hlc.go:312 remote wall time is too far ahead (500ms) to be trustworthy - updating anyway
[08:42:54][Step 2/2] I181018 08:39:02.828813 77971 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] storage.pendingLeaseRequest: requesting lease
[08:42:54][Step 2/2] + exit_status=2
[08:42:54][Step 2/2] W181018 08:39:02.828118 77718 util/hlc/hlc.go:312 remote wall time is too far ahead (500ms) to be trustworthy - updating anyway
[08:42:54][Step 2/2] + go tool test2json -t
[08:42:54][Step 2/2] --- PASS: TestRangeCommandClockUpdate (0.43s)
[08:42:54][Step 2/2] + github-post
[08:42:54][Step 2/2] === RUN TestRejectFutureCommand
[08:42:54][Step 2/2] I181018 08:39:02.939226 77929 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:42781" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] --- PASS: TestRejectFutureCommand (0.11s)
[08:42:54][Step 2/2] === RUN TestTxnPutOutOfOrder
[08:42:54][Step 2/2] I181018 08:39:02.993201 78068 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:03.139445 78068 storage/replica_command.go:75 [s1,r1/1:/M{in-ax}] test injecting error: Test
[08:42:54][Step 2/2] I181018 08:39:03.166109 78068 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] kv.TxnCoordSender: heartbeat loop
[08:42:54][Step 2/2] --- PASS: TestTxnPutOutOfOrder (0.26s)
[08:42:54][Step 2/2] === RUN TestRangeLookupUseReverse
[08:42:54][Step 2/2] I181018 08:39:03.257835 78071 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:03.320487 78071 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "g" [r2]
[08:42:54][Step 2/2] I181018 08:39:03.341252 78071 storage/replica_command.go:300 [s1,r1/1:{/Min-g}] initiating a split of this range at key "e" [r3]
[08:42:54][Step 2/2] I181018 08:39:03.356648 78071 storage/replica_command.go:300 [s1,r1/1:{/Min-e}] initiating a split of this range at key "c" [r4]
[08:42:54][Step 2/2] I181018 08:39:03.373842 78071 storage/replica_command.go:300 [s1,r1/1:{/Min-c}] initiating a split of this range at key "a" [r5]
[08:42:54][Step 2/2] === RUN TestRangeLookupUseReverse/key="f"
[08:42:54][Step 2/2] === RUN TestRangeLookupUseReverse/key="g"
[08:42:54][Step 2/2] === RUN TestRangeLookupUseReverse/key="e"
[08:42:54][Step 2/2] === RUN TestRangeLookupUseReverse/key=/Max
[08:42:54][Step 2/2] === RUN TestRangeLookupUseReverse/key=/Meta2/Max
[08:42:54][Step 2/2] --- PASS: TestRangeLookupUseReverse (0.19s)
[08:42:54][Step 2/2] --- PASS: TestRangeLookupUseReverse/key="f" (0.00s)
[08:42:54][Step 2/2] --- PASS: TestRangeLookupUseReverse/key="g" (0.00s)
[08:42:54][Step 2/2] --- PASS: TestRangeLookupUseReverse/key="e" (0.00s)
[08:42:54][Step 2/2] --- PASS: TestRangeLookupUseReverse/key=/Max (0.00s)
[08:42:54][Step 2/2] --- PASS: TestRangeLookupUseReverse/key=/Meta2/Max (0.00s)
[08:42:54][Step 2/2] === RUN TestRangeTransferLeaseExpirationBased
[08:42:54][Step 2/2] === RUN TestRangeTransferLeaseExpirationBased/Transfer
[08:42:54][Step 2/2] I181018 08:39:03.499705 78238 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:44873" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:39:03.550018 78238 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:03.551081 78238 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:41389" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:03.560187 78501 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:44873
[08:42:54][Step 2/2] I181018 08:39:03.630767 78238 storage/client_test.go:421 gossip network initialized
[08:42:54][Step 2/2] I181018 08:39:03.639503 78238 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot cdf798f3 at applied index 16
[08:42:54][Step 2/2] I181018 08:39:03.641064 78238 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:39:03.642380 78523 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=cdf798f3, encoded size=8290, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:39:03.645048 78523 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:03.648032 78238 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:39:03.656896 78238 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=dd3441cf] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] W181018 08:39:03.811082 78238 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,154 exp=9.000000123,154 pro=0.000000123,155
[08:42:54][Step 2/2] W181018 08:39:03.811807 78238 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n2,s2):2 not lease holder; current lease is repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=9.000000123,5 pro=0.000000123,6
[08:42:54][Step 2/2] W181018 08:39:03.812525 78238 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n2,s2):2 not lease holder; current lease is repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=9.000000123,5 pro=0.000000123,6
[08:42:54][Step 2/2] W181018 08:39:03.813195 78238 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n2,s2):2 not lease holder; current lease is repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=9.000000123,5 pro=0.000000123,6
[08:42:54][Step 2/2] W181018 08:39:03.814231 78238 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n2,s2):2 not lease holder; current lease is repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=9.000000123,5 pro=0.000000123,6
[08:42:54][Step 2/2] W181018 08:39:03.815057 78238 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n2,s2):2 not lease holder; current lease is repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=9.000000123,5 pro=0.000000123,6
[08:42:54][Step 2/2] W181018 08:39:03.816041 78238 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n2,s2):2 not lease holder; current lease is repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=9.000000123,5 pro=0.000000123,6
[08:42:54][Step 2/2] === RUN TestRangeTransferLeaseExpirationBased/TransferWithExtension
[08:42:54][Step 2/2] I181018 08:39:03.902914 78409 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:39429" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:39:03.954810 78409 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:03.955751 78409 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:40097" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:03.957417 78752 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:39429
[08:42:54][Step 2/2] I181018 08:39:03.978920 78409 storage/client_test.go:421 gossip network initialized
[08:42:54][Step 2/2] I181018 08:39:03.989659 78409 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 8e791b6d at applied index 16
[08:42:54][Step 2/2] I181018 08:39:03.991366 78409 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:39:03.994991 78764 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=8e791b6d, encoded size=8290, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:39:03.999083 78764 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 4ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:04.003546 78409 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:39:04.012348 78409 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=8e635bda] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] W181018 08:39:04.298930 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n2,s2):2 not lease holder; current lease is repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=9.000000123,5 pro=0.000000123,6
[08:42:54][Step 2/2] W181018 08:39:04.312962 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=9.000000123,153 pro=0.000000123,154
[08:42:54][Step 2/2] W181018 08:39:04.313905 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=9.000000123,153 pro=0.000000123,154
[08:42:54][Step 2/2] W181018 08:39:04.314791 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=9.000000123,153 pro=0.000000123,154
[08:42:54][Step 2/2] W181018 08:39:04.315461 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=9.000000123,153 pro=0.000000123,154
[08:42:54][Step 2/2] W181018 08:39:04.317054 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=9.000000123,153 pro=0.000000123,154
[08:42:54][Step 2/2] W181018 08:39:04.318156 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=9.000000123,153 pro=0.000000123,154
[08:42:54][Step 2/2] W181018 08:39:04.323193 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=9.000000123,153 pro=0.000000123,154
[08:42:54][Step 2/2] W181018 08:39:04.324155 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=9.000000123,153 pro=0.000000123,154
[08:42:54][Step 2/2] W181018 08:39:04.325503 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=9.000000123,153 pro=0.000000123,154
[08:42:54][Step 2/2] W181018 08:39:04.327432 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] W181018 08:39:04.328915 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] W181018 08:39:04.330030 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] W181018 08:39:04.330801 78674 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n2,s2):2 not lease holder; current lease is repl=(n1,s1):1 seq=0 start=9.000000123,71 exp=18.000000123,71 pro=9.000000123,72
[08:42:54][Step 2/2] W181018 08:39:04.331053 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] I181018 08:39:04.334787 78807 storage/queue.go:876 [replicate] purgatory is now empty
[08:42:54][Step 2/2] W181018 08:39:04.335220 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] W181018 08:39:04.336231 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] W181018 08:39:04.338394 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] W181018 08:39:04.339945 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] W181018 08:39:04.340754 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] W181018 08:39:04.341538 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] W181018 08:39:04.342667 78409 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,153 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] === RUN TestRangeTransferLeaseExpirationBased/DrainTransfer
[08:42:54][Step 2/2] I181018 08:39:04.459405 78549 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:42633" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:39:04.514380 78549 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:04.515575 78549 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:38223" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:04.517426 79043 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:42633
[08:42:54][Step 2/2] I181018 08:39:04.544217 78549 storage/client_test.go:421 gossip network initialized
[08:42:54][Step 2/2] I181018 08:39:04.555120 78549 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot c766fe7e at applied index 16
[08:42:54][Step 2/2] I181018 08:39:04.556653 78549 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:39:04.558497 78796 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=c766fe7e, encoded size=8290, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:39:04.561313 78796 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:04.564924 78549 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:39:04.573123 78549 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=58cac11d] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:39:04.725995 78549 storage/store.go:1053 [drain] waiting for 1 replicas to transfer their lease away
[08:42:54][Step 2/2] W181018 08:39:04.739033 78549 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,151 exp=9.000000123,151 pro=0.000000123,152
[08:42:54][Step 2/2] I181018 08:39:04.743450 79033 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] storage.replicate: processing replica
[08:42:54][Step 2/2] I181018 08:39:04.744014 79018 storage/queue.go:876 [replicate] purgatory is now empty
[08:42:54][Step 2/2] W181018 08:39:04.764412 78798 storage/raft_transport.go:584 while processing outgoing Raft queue to node 2: rpc error: code = Canceled desc = context canceled:
[08:42:54][Step 2/2] W181018 08:39:04.764966 78802 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: rpc error: code = Unavailable desc = transport is closing:
[08:42:54][Step 2/2] === RUN TestRangeTransferLeaseExpirationBased/DrainTransferWithExtension
[08:42:54][Step 2/2] I181018 08:39:04.836890 79034 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:43545" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:39:04.886447 79034 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:04.887652 79034 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:36823" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:04.891009 79093 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:43545
[08:42:54][Step 2/2] I181018 08:39:04.962181 79034 storage/client_test.go:421 gossip network initialized
[08:42:54][Step 2/2] I181018 08:39:04.970630 79034 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 9abf111b at applied index 16
[08:42:54][Step 2/2] I181018 08:39:04.972387 79034 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:39:04.973555 79305 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=9abf111b, encoded size=8290, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:39:04.976864 79305 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:04.980201 79034 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:39:04.988978 79034 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=ba2bd2cb] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] W181018 08:39:05.073948 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n2,s2):2 not lease holder; current lease is repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=9.000000123,5 pro=0.000000123,6
[08:42:54][Step 2/2] W181018 08:39:05.096907 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=9.000000123,148 pro=0.000000123,149
[08:42:54][Step 2/2] W181018 08:39:05.098202 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=9.000000123,148 pro=0.000000123,149
[08:42:54][Step 2/2] W181018 08:39:05.101225 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=9.000000123,148 pro=0.000000123,149
[08:42:54][Step 2/2] W181018 08:39:05.102048 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=9.000000123,148 pro=0.000000123,149
[08:42:54][Step 2/2] W181018 08:39:05.102758 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=9.000000123,148 pro=0.000000123,149
[08:42:54][Step 2/2] W181018 08:39:05.103930 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=9.000000123,148 pro=0.000000123,149
[08:42:54][Step 2/2] W181018 08:39:05.105344 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=9.000000123,148 pro=0.000000123,149
[08:42:54][Step 2/2] W181018 08:39:05.106300 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=9.000000123,148 pro=0.000000123,149
[08:42:54][Step 2/2] W181018 08:39:05.107847 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=9.000000123,148 pro=0.000000123,149
[08:42:54][Step 2/2] W181018 08:39:05.108862 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=9.000000123,148 pro=0.000000123,149
[08:42:54][Step 2/2] W181018 08:39:05.111952 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=9.000000123,148 pro=0.000000123,149
[08:42:54][Step 2/2] W181018 08:39:05.113086 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] W181018 08:39:05.114083 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] I181018 08:39:05.114761 79020 storage/queue.go:876 [replicate] purgatory is now empty
[08:42:54][Step 2/2] W181018 08:39:05.115165 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] W181018 08:39:05.116016 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] W181018 08:39:05.116708 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] W181018 08:39:05.117627 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] W181018 08:39:05.119090 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] W181018 08:39:05.120271 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] W181018 08:39:05.123110 79034 storage/client_replica_test.go:583 [NotLeaseHolderError] r1: replica (n1,s1):1 not lease holder; current lease is repl=(n2,s2):2 seq=2 start=0.000000123,148 exp=18.000000123,3 pro=9.000000123,4
[08:42:54][Step 2/2] I181018 08:39:05.124218 79019 storage/store.go:1053 [drain] waiting for 1 replicas to transfer their lease away
[08:42:54][Step 2/2] --- PASS: TestRangeTransferLeaseExpirationBased (1.79s)
[08:42:54][Step 2/2] --- PASS: TestRangeTransferLeaseExpirationBased/Transfer (0.40s)
[08:42:54][Step 2/2] --- PASS: TestRangeTransferLeaseExpirationBased/TransferWithExtension (0.54s)
[08:42:54][Step 2/2] --- PASS: TestRangeTransferLeaseExpirationBased/DrainTransfer (0.38s)
[08:42:54][Step 2/2] --- PASS: TestRangeTransferLeaseExpirationBased/DrainTransferWithExtension (0.40s)
[08:42:54][Step 2/2] === RUN TestRangeLimitTxnMaxTimestamp
[08:42:54][Step 2/2] I181018 08:39:05.290207 79191 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:45559" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:39:05.343677 79191 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:05.344818 79191 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:46089" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:05.346752 79464 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:45559
[08:42:54][Step 2/2] W181018 08:39:05.354950 79213 internal/client/range_lookup.go:250 [hb,txn=0314fb4d,range-lookup=/Meta2/System/NodeLiveness/2/NULL] range lookup of key /Meta2/System/NodeLiveness/2/NULL found only non-matching ranges []; retrying
[08:42:54][Step 2/2] W181018 08:39:05.360112 79212 internal/client/range_lookup.go:250 [hb,txn=0314fb4d,range-lookup=/System/NodeLiveness/2] range lookup of key /System/NodeLiveness/2 found only non-matching ranges []; retrying
[08:42:54][Step 2/2] I181018 08:39:05.383158 79191 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 027512a6 at applied index 16
[08:42:54][Step 2/2] I181018 08:39:05.384919 79191 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:39:05.386465 79468 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=027512a6, encoded size=8298, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:39:05.389693 79468 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:05.393941 79191 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:39:05.406022 79191 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=88988997] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] --- PASS: TestRangeLimitTxnMaxTimestamp (0.37s)
[08:42:54][Step 2/2] === RUN TestLeaseMetricsOnSplitAndTransfer
[08:42:54][Step 2/2] I181018 08:39:05.638867 79573 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:33645" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:39:05.682024 79573 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:05.682986 79573 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:39137" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:05.684864 79795 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:33645
[08:42:54][Step 2/2] I181018 08:39:05.704645 79573 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot a2353c4b at applied index 15
[08:42:54][Step 2/2] I181018 08:39:05.706063 79573 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 48, log entries: 5, rate-limit: 2.0 MiB/sec, 3ms
[08:42:54][Step 2/2] I181018 08:39:05.708168 79678 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 15 (id=a2353c4b, encoded size=8165, 1 rocksdb batches, 5 log entries)
[08:42:54][Step 2/2] I181018 08:39:05.710979 79678 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:05.714306 79573 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:39:05.723014 79573 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=14cfabe5] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:39:05.883349 79573 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r2]
[08:42:54][Step 2/2] I181018 08:39:05.926648 79573 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:54][Step 2/2] I181018 08:39:05.958432 79609 storage/replica_proposal.go:212 [s1,r2/1:{a-/Max}] new range lease repl=(n1,s1):1 seq=2 start=0.000000123,227 epo=1 pro=1.800000125,34 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,6 pro=0.000000123,8
[08:42:54][Step 2/2] --- PASS: TestLeaseMetricsOnSplitAndTransfer (0.45s)
[08:42:54][Step 2/2] === RUN TestLeaseNotUsedAfterRestart
[08:42:54][Step 2/2] I181018 08:39:06.103197 79822 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:44447" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:06.119342 79822 storage/client_replica_test.go:1106 restarting
[08:42:54][Step 2/2] --- PASS: TestLeaseNotUsedAfterRestart (0.15s)
[08:42:54][Step 2/2] === RUN TestLeaseExtensionNotBlockedByRead
[08:42:54][Step 2/2] W181018 08:39:06.232510 80060 server/status/runtime.go:295 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
[08:42:54][Step 2/2] I181018 08:39:06.265282 80060 server/server.go:851 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
[08:42:54][Step 2/2] I181018 08:39:06.265799 80060 base/addr_validation.go:279 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
[08:42:54][Step 2/2] I181018 08:39:06.265901 80060 base/addr_validation.go:319 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
[08:42:54][Step 2/2] I181018 08:39:06.271045 80060 server/config.go:493 [n?] 1 storage engine initialized
[08:42:54][Step 2/2] I181018 08:39:06.271225 80060 server/config.go:496 [n?] RocksDB cache size: 128 MiB
[08:42:54][Step 2/2] I181018 08:39:06.271274 80060 server/config.go:496 [n?] store 0: in-memory, size 0 B
[08:42:54][Step 2/2] I181018 08:39:06.352770 80060 server/node.go:371 [n?] **** cluster d9f5f939-f049-42d0-bbe8-a708218c879c has been created
[08:42:54][Step 2/2] I181018 08:39:06.352960 80060 server/server.go:1397 [n?] **** add additional nodes by specifying --join=127.0.0.1:39463
[08:42:54][Step 2/2] I181018 08:39:06.354131 80060 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:39463" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:1 patch:0 unstable:1 > build_tag:"v2.2.0-alpha.00000000-1771-g310a049" started_at:1539851946353808255
[08:42:54][Step 2/2] I181018 08:39:06.370437 80060 server/node.go:475 [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=7.1 KiB), ranges=1, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=7221.00 p25=7221.00 p50=7221.00 p75=7221.00 p90=7221.00 pMax=7221.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:54][Step 2/2] I181018 08:39:06.370921 80060 storage/stores.go:242 [n1] read 0 node addresses from persistent storage
[08:42:54][Step 2/2] I181018 08:39:06.371848 80060 server/node.go:698 [n1] connecting to gossip network to verify cluster ID...
[08:42:54][Step 2/2] I181018 08:39:06.372076 80060 server/node.go:723 [n1] node connected via gossip and verified as part of cluster "d9f5f939-f049-42d0-bbe8-a708218c879c"
[08:42:54][Step 2/2] I181018 08:39:06.372441 80060 server/node.go:547 [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
[08:42:54][Step 2/2] I181018 08:39:06.373742 80060 server/status/recorder.go:610 [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
[08:42:54][Step 2/2] I181018 08:39:06.373814 80060 server/server.go:1822 [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
[08:42:54][Step 2/2] I181018 08:39:06.376305 80060 server/server.go:1529 [n1] starting https server at 127.0.0.1:44765 (use: 127.0.0.1:44765)
[08:42:54][Step 2/2] I181018 08:39:06.376457 80060 server/server.go:1531 [n1] starting grpc/postgres server at 127.0.0.1:39463
[08:42:54][Step 2/2] I181018 08:39:06.376511 80060 server/server.go:1532 [n1] advertising CockroachDB node at 127.0.0.1:39463
[08:42:54][Step 2/2] W181018 08:39:06.376966 80060 jobs/registry.go:317 [n1] unable to get node liveness: node not in the liveness table
[08:42:54][Step 2/2] I181018 08:39:06.404571 80253 storage/replica_command.go:300 [n1,split,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
[08:42:54][Step 2/2] I181018 08:39:06.517449 80324 storage/replica_command.go:300 [n1,split,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
[08:42:54][Step 2/2] W181018 08:39:06.526186 80309 storage/intent_resolver.go:675 [n1,s1] failed to push during intent resolution: failed to push "unnamed" id=d250c42f key=/Table/SystemConfigSpan/Start rw=true pri=0.02096682 iso=SERIALIZABLE stat=PENDING epo=0 ts=1539851946.436072765,0 orig=1539851946.436072765,0 max=1539851946.436072765,0 wto=false rop=false seq=6
[08:42:54][Step 2/2] I181018 08:39:06.574418 80166 sql/event_log.go:126 [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
[08:42:54][Step 2/2] I181018 08:39:06.660605 80312 storage/replica_command.go:300 [n1,split,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
[08:42:54][Step 2/2] I181018 08:39:06.744623 80283 storage/replica_command.go:300 [n1,split,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
[08:42:54][Step 2/2] I181018 08:39:06.851945 80296 storage/replica_command.go:300 [n1,split,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
[08:42:54][Step 2/2] I181018 08:39:06.902172 80279 sql/event_log.go:126 [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:2.1-1 User:root}
[08:42:54][Step 2/2] I181018 08:39:06.963983 80301 storage/replica_command.go:300 [n1,split,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
[08:42:54][Step 2/2] I181018 08:39:07.048282 80303 storage/replica_command.go:300 [n1,split,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
[08:42:54][Step 2/2] I181018 08:39:07.083452 80289 sql/event_log.go:126 [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
[08:42:54][Step 2/2] I181018 08:39:07.134781 79101 storage/replica_command.go:300 [n1,split,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
[08:42:54][Step 2/2] I181018 08:39:07.195522 80175 storage/replica_command.go:300 [n1,split,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
[08:42:54][Step 2/2] I181018 08:39:07.262624 80318 storage/replica_command.go:300 [n1,split,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
[08:42:54][Step 2/2] I181018 08:39:07.299819 80176 sql/event_log.go:126 [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:b5ff45e6-bfc9-4f8a-9c65-a743f7accb79 User:root}
[08:42:54][Step 2/2] I181018 08:39:07.325094 80409 storage/replica_command.go:300 [n1,split,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
[08:42:54][Step 2/2] I181018 08:39:07.401480 80350 storage/replica_command.go:300 [n1,split,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
[08:42:54][Step 2/2] I181018 08:39:07.424067 80436 sql/event_log.go:126 [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
[08:42:54][Step 2/2] I181018 08:39:07.489245 80418 storage/replica_command.go:300 [n1,split,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
[08:42:54][Step 2/2] I181018 08:39:07.519848 80352 sql/event_log.go:126 [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
[08:42:54][Step 2/2] I181018 08:39:07.599110 80060 server/server.go:1585 [n1] done ensuring all necessary migrations have run
[08:42:54][Step 2/2] I181018 08:39:07.599319 80060 server/server.go:1588 [n1] serving sql connections
[08:42:54][Step 2/2] I181018 08:39:07.626120 80489 storage/replica_command.go:300 [n1,split,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
[08:42:54][Step 2/2] I181018 08:39:07.663907 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.667860 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.669615 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.676289 80457 sql/event_log.go:126 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:39463} Attrs: Locality: ServerVersion:2.1-1 BuildTag:v2.2.0-alpha.00000000-1771-g310a049 StartedAt:1539851946353808255 LocalityAddress:[]} ClusterID:d9f5f939-f049-42d0-bbe8-a708218c879c StartedAt:1539851946353808255 LastUp:1539851946353808255}
[08:42:54][Step 2/2] I181018 08:39:07.678735 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.689157 80454 server/server_update.go:68 [n1] no need to upgrade, cluster already at the newest version
[08:42:54][Step 2/2] I181018 08:39:07.689799 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.690934 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.691653 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.692240 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.692839 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.693378 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.694017 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.694536 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.699776 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.700935 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.701936 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.702836 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.703936 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.704904 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.705932 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.707144 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.708746 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.711556 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.719410 80060 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.759669 80384 storage/replica_command.go:300 [n1,split,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
[08:42:54][Step 2/2] I181018 08:39:07.766258 80060 server/testserver.go:427 had 15 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.776893 80060 server/testserver.go:427 had 15 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.795306 80060 server/testserver.go:427 had 15 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.819507 80511 storage/replica_command.go:300 [n1,split,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
[08:42:54][Step 2/2] I181018 08:39:07.834531 80060 server/testserver.go:427 had 16 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.836453 80100 gossip/gossip.go:1510 [n1] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:39:07.837380 80100 storage/stores.go:261 [n1] wrote 0 node addresses to persistent storage
[08:42:54][Step 2/2] I181018 08:39:07.891687 80476 storage/replica_command.go:300 [n1,split,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
[08:42:54][Step 2/2] I181018 08:39:07.910345 80060 server/testserver.go:427 had 17 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:07.962914 80579 storage/replica_command.go:300 [n1,split,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
[08:42:54][Step 2/2] I181018 08:39:08.034284 80463 storage/replica_command.go:300 [n1,split,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
[08:42:54][Step 2/2] I181018 08:39:08.064433 80060 server/testserver.go:427 had 19 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:08.348575 80060 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 node.Node: batch
[08:42:54][Step 2/2] 1 [async] transport racer
[08:42:54][Step 2/2] 1 [async] closedts-rangefeed-subscriber
[08:42:54][Step 2/2] I181018 08:39:08.351892 80060 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 node.Node: batch
[08:42:54][Step 2/2] 1 [async] transport racer
[08:42:54][Step 2/2] I181018 08:39:08.352542 80060 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] transport racer
[08:42:54][Step 2/2] I181018 08:39:08.499558 79794 kv/transport_race.go:113 transport race promotion: ran 35 iterations on up to 754 requests
[08:42:54][Step 2/2] --- PASS: TestLeaseExtensionNotBlockedByRead (2.34s)
[08:42:54][Step 2/2] === RUN TestLeaseInfoRequest
[08:42:54][Step 2/2] W181018 08:39:08.584385 80427 server/status/runtime.go:295 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
[08:42:54][Step 2/2] I181018 08:39:08.620252 80427 server/server.go:851 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
[08:42:54][Step 2/2] I181018 08:39:08.620861 80427 base/addr_validation.go:279 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
[08:42:54][Step 2/2] I181018 08:39:08.620966 80427 base/addr_validation.go:319 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
[08:42:54][Step 2/2] I181018 08:39:08.626351 80427 server/config.go:493 [n?] 1 storage engine initialized
[08:42:54][Step 2/2] I181018 08:39:08.626498 80427 server/config.go:496 [n?] RocksDB cache size: 128 MiB
[08:42:54][Step 2/2] I181018 08:39:08.626547 80427 server/config.go:496 [n?] store 0: in-memory, size 0 B
[08:42:54][Step 2/2] I181018 08:39:08.709977 80427 server/node.go:371 [n?] **** cluster 030388b7-c5a5-4a4f-8219-e0d7b8e388f0 has been created
[08:42:54][Step 2/2] I181018 08:39:08.710171 80427 server/server.go:1397 [n?] **** add additional nodes by specifying --join=127.0.0.1:36285
[08:42:54][Step 2/2] I181018 08:39:08.711789 80427 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:36285" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:1 patch:0 unstable:1 > build_tag:"v2.2.0-alpha.00000000-1771-g310a049" started_at:1539851948711380578
[08:42:54][Step 2/2] I181018 08:39:08.736861 80427 server/node.go:475 [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=7.1 KiB), ranges=1, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=7221.00 p25=7221.00 p50=7221.00 p75=7221.00 p90=7221.00 pMax=7221.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:54][Step 2/2] I181018 08:39:08.737427 80427 storage/stores.go:242 [n1] read 0 node addresses from persistent storage
[08:42:54][Step 2/2] I181018 08:39:08.738641 80427 server/node.go:698 [n1] connecting to gossip network to verify cluster ID...
[08:42:54][Step 2/2] I181018 08:39:08.738984 80427 server/node.go:723 [n1] node connected via gossip and verified as part of cluster "030388b7-c5a5-4a4f-8219-e0d7b8e388f0"
[08:42:54][Step 2/2] I181018 08:39:08.739609 80427 server/node.go:547 [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
[08:42:54][Step 2/2] I181018 08:39:08.741036 80427 server/status/recorder.go:610 [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
[08:42:54][Step 2/2] I181018 08:39:08.741195 80427 server/server.go:1822 [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
[08:42:54][Step 2/2] I181018 08:39:08.744058 80427 server/server.go:1529 [n1] starting https server at 127.0.0.1:45417 (use: 127.0.0.1:45417)
[08:42:54][Step 2/2] I181018 08:39:08.744199 80427 server/server.go:1531 [n1] starting grpc/postgres server at 127.0.0.1:36285
[08:42:54][Step 2/2] I181018 08:39:08.744268 80427 server/server.go:1532 [n1] advertising CockroachDB node at 127.0.0.1:36285
[08:42:54][Step 2/2] I181018 08:39:08.902806 80853 sql/event_log.go:126 [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
[08:42:54][Step 2/2] I181018 08:39:09.080765 80883 sql/event_log.go:126 [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:2.1-1 User:root}
[08:42:54][Step 2/2] I181018 08:39:09.158324 80648 sql/event_log.go:126 [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
[08:42:54][Step 2/2] I181018 08:39:09.288997 80855 sql/event_log.go:126 [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:c8ab09ee-4994-4559-8d7b-abf06a8b6756 User:root}
[08:42:54][Step 2/2] I181018 08:39:09.343019 80858 sql/event_log.go:126 [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
[08:42:54][Step 2/2] I181018 08:39:09.376831 80886 sql/event_log.go:126 [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
[08:42:54][Step 2/2] I181018 08:39:09.399944 80427 server/server.go:1585 [n1] done ensuring all necessary migrations have run
[08:42:54][Step 2/2] I181018 08:39:09.400151 80427 server/server.go:1588 [n1] serving sql connections
[08:42:54][Step 2/2] I181018 08:39:09.423984 80870 server/server_update.go:68 [n1] no need to upgrade, cluster already at the newest version
[08:42:54][Step 2/2] I181018 08:39:09.429442 80873 sql/event_log.go:126 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:36285} Attrs: Locality: ServerVersion:2.1-1 BuildTag:v2.2.0-alpha.00000000-1771-g310a049 StartedAt:1539851948711380578 LocalityAddress:[]} ClusterID:030388b7-c5a5-4a4f-8219-e0d7b8e388f0 StartedAt:1539851948711380578 LastUp:1539851948711380578}
[08:42:54][Step 2/2] I181018 08:39:09.523835 80575 sql/event_log.go:126 [n1,client=127.0.0.1:49394,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:kv.range_merge.queue_enabled Value:false User:root}
[08:42:54][Step 2/2] W181018 08:39:09.579027 80427 server/status/runtime.go:295 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
[08:42:54][Step 2/2] I181018 08:39:09.630069 80427 server/server.go:851 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
[08:42:54][Step 2/2] I181018 08:39:09.630660 80427 base/addr_validation.go:279 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
[08:42:54][Step 2/2] I181018 08:39:09.630738 80427 base/addr_validation.go:319 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
[08:42:54][Step 2/2] I181018 08:39:09.636053 80427 server/config.go:493 [n?] 1 storage engine initialized
[08:42:54][Step 2/2] I181018 08:39:09.636221 80427 server/config.go:496 [n?] RocksDB cache size: 128 MiB
[08:42:54][Step 2/2] I181018 08:39:09.636270 80427 server/config.go:496 [n?] store 0: in-memory, size 0 B
[08:42:54][Step 2/2] W181018 08:39:09.636536 80427 gossip/gossip.go:1496 [n?] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:09.637387 80427 server/server.go:1402 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
[08:42:54][Step 2/2] I181018 08:39:09.724896 80732 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:36285
[08:42:54][Step 2/2] I181018 08:39:09.726212 80881 gossip/server.go:232 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:44417}
[08:42:54][Step 2/2] I181018 08:39:09.730903 80427 server/node.go:698 [n?] connecting to gossip network to verify cluster ID...
[08:42:54][Step 2/2] I181018 08:39:09.731358 80427 server/node.go:723 [n?] node connected via gossip and verified as part of cluster "030388b7-c5a5-4a4f-8219-e0d7b8e388f0"
[08:42:54][Step 2/2] I181018 08:39:09.733204 80881 gossip/server.go:232 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:44417}
[08:42:54][Step 2/2] I181018 08:39:09.758963 80427 server/node.go:426 [n?] new node allocated ID 2
[08:42:54][Step 2/2] I181018 08:39:09.759716 80427 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:44417" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:1 patch:0 unstable:1 > build_tag:"v2.2.0-alpha.00000000-1771-g310a049" started_at:1539851949759289326
[08:42:54][Step 2/2] I181018 08:39:09.760995 80427 storage/stores.go:242 [n2] read 0 node addresses from persistent storage
[08:42:54][Step 2/2] I181018 08:39:09.761438 80427 storage/stores.go:261 [n2] wrote 1 node addresses to persistent storage
[08:42:54][Step 2/2] I181018 08:39:09.763922 80430 storage/stores.go:261 [n1] wrote 1 node addresses to persistent storage
[08:42:54][Step 2/2] I181018 08:39:09.794419 80427 server/node.go:673 [n2] bootstrapped store [n2,s2]
[08:42:54][Step 2/2] I181018 08:39:09.798758 80427 server/node.go:547 [n2] node=2: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
[08:42:54][Step 2/2] I181018 08:39:09.800257 80427 server/status/recorder.go:610 [n2] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
[08:42:54][Step 2/2] I181018 08:39:09.800436 80427 server/server.go:1822 [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
[08:42:54][Step 2/2] I181018 08:39:09.812785 80427 server/server.go:1529 [n2] starting https server at 127.0.0.1:46315 (use: 127.0.0.1:46315)
[08:42:54][Step 2/2] I181018 08:39:09.812998 80427 server/server.go:1531 [n2] starting grpc/postgres server at 127.0.0.1:44417
[08:42:54][Step 2/2] I181018 08:39:09.813059 80427 server/server.go:1532 [n2] advertising CockroachDB node at 127.0.0.1:44417
[08:42:54][Step 2/2] I181018 08:39:09.847159 80427 server/server.go:1585 [n2] done ensuring all necessary migrations have run
[08:42:54][Step 2/2] I181018 08:39:09.847449 80427 server/server.go:1588 [n2] serving sql connections
[08:42:54][Step 2/2] I181018 08:39:09.890109 80955 storage/replica_consistency.go:127 [n1,consistencyChecker,s1,r1/1:/M{in-ax}] triggering stats recomputation to resolve delta of {ContainsEstimates:true LastUpdateNanos:1539851949486082702 IntentAge:0 GCBytesAge:0 LiveBytes:-22301 LiveCount:-466 KeyBytes:-21706 KeyCount:-466 ValBytes:-595 ValCount:-466 IntentBytes:0 IntentCount:0 SysBytes:0 SysCount:0}
[08:42:54][Step 2/2] W181018 08:39:09.936790 80427 server/status/runtime.go:295 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
[08:42:54][Step 2/2] I181018 08:39:10.013069 81082 sql/event_log.go:126 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:44417} Attrs: Locality: ServerVersion:2.1-1 BuildTag:v2.2.0-alpha.00000000-1771-g310a049 StartedAt:1539851949759289326 LocalityAddress:[]} ClusterID:030388b7-c5a5-4a4f-8219-e0d7b8e388f0 StartedAt:1539851949759289326 LastUp:1539851949759289326}
[08:42:54][Step 2/2] I181018 08:39:10.018602 80427 server/server.go:851 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
[08:42:54][Step 2/2] I181018 08:39:10.021337 80427 base/addr_validation.go:279 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
[08:42:54][Step 2/2] I181018 08:39:10.021521 80427 base/addr_validation.go:319 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
[08:42:54][Step 2/2] I181018 08:39:10.027615 80427 server/config.go:493 [n?] 1 storage engine initialized
[08:42:54][Step 2/2] I181018 08:39:10.027774 80427 server/config.go:496 [n?] RocksDB cache size: 128 MiB
[08:42:54][Step 2/2] I181018 08:39:10.027825 80427 server/config.go:496 [n?] store 0: in-memory, size 0 B
[08:42:54][Step 2/2] W181018 08:39:10.028144 80427 gossip/gossip.go:1496 [n?] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:10.033854 80427 server/server.go:1402 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
[08:42:54][Step 2/2] I181018 08:39:10.036909 81079 server/server_update.go:68 [n2] no need to upgrade, cluster already at the newest version
[08:42:54][Step 2/2] I181018 08:39:10.121362 81201 gossip/client.go:129 [n?] started gossip client to 127.0.0.1:36285
[08:42:54][Step 2/2] I181018 08:39:10.126869 81122 gossip/server.go:232 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:33167}
[08:42:54][Step 2/2] I181018 08:39:10.130994 80427 server/node.go:698 [n?] connecting to gossip network to verify cluster ID...
[08:42:54][Step 2/2] I181018 08:39:10.131354 80427 server/node.go:723 [n?] node connected via gossip and verified as part of cluster "030388b7-c5a5-4a4f-8219-e0d7b8e388f0"
[08:42:54][Step 2/2] I181018 08:39:10.135877 81122 gossip/server.go:232 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:33167}
[08:42:54][Step 2/2] I181018 08:39:10.153499 80427 server/node.go:426 [n?] new node allocated ID 3
[08:42:54][Step 2/2] I181018 08:39:10.154140 80427 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:33167" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:1 patch:0 unstable:1 > build_tag:"v2.2.0-alpha.00000000-1771-g310a049" started_at:1539851950153764867
[08:42:54][Step 2/2] I181018 08:39:10.155280 80427 storage/stores.go:242 [n3] read 0 node addresses from persistent storage
[08:42:54][Step 2/2] I181018 08:39:10.155874 80427 storage/stores.go:261 [n3] wrote 2 node addresses to persistent storage
[08:42:54][Step 2/2] I181018 08:39:10.160972 80430 storage/stores.go:261 [n1] wrote 2 node addresses to persistent storage
[08:42:54][Step 2/2] I181018 08:39:10.162513 80823 storage/stores.go:261 [n2] wrote 2 node addresses to persistent storage
[08:42:54][Step 2/2] I181018 08:39:10.182761 80427 server/node.go:673 [n3] bootstrapped store [n3,s3]
[08:42:54][Step 2/2] I181018 08:39:10.185505 80427 server/node.go:547 [n3] node=3: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
[08:42:54][Step 2/2] I181018 08:39:10.186960 80427 server/status/recorder.go:610 [n3] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
[08:42:54][Step 2/2] I181018 08:39:10.187133 80427 server/server.go:1822 [n3] Could not start heap profiler worker due to: directory to store profiles could not be determined
[08:42:54][Step 2/2] I181018 08:39:10.190299 80427 server/server.go:1529 [n3] starting https server at 127.0.0.1:37649 (use: 127.0.0.1:37649)
[08:42:54][Step 2/2] I181018 08:39:10.190478 80427 server/server.go:1531 [n3] starting grpc/postgres server at 127.0.0.1:33167
[08:42:54][Step 2/2] I181018 08:39:10.190617 80427 server/server.go:1532 [n3] advertising CockroachDB node at 127.0.0.1:33167
[08:42:54][Step 2/2] I181018 08:39:10.197150 80427 server/server.go:1585 [n3] done ensuring all necessary migrations have run
[08:42:54][Step 2/2] I181018 08:39:10.197355 80427 server/server.go:1588 [n3] serving sql connections
[08:42:54][Step 2/2] I181018 08:39:10.257598 80427 storage/store_snapshot.go:621 [n1,s1,r1/1:/M{in-ax}] sending preemptive snapshot c2dd977e at applied index 84
[08:42:54][Step 2/2] I181018 08:39:10.291266 80427 storage/store_snapshot.go:664 [n1,s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 1073, log entries: 5, rate-limit: 2.0 MiB/sec, 47ms
[08:42:54][Step 2/2] I181018 08:39:10.293116 81163 storage/replica_raftstorage.go:804 [n2,s2,r1/?:{-}] applying preemptive snapshot at index 84 (id=c2dd977e, encoded size=200653, 1 rocksdb batches, 5 log entries)
[08:42:54][Step 2/2] I181018 08:39:10.326603 81163 storage/replica_raftstorage.go:810 [n2,s2,r1/?:/M{in-ax}] applied preemptive snapshot in 33ms [clear=0ms batch=1ms entries=30ms commit=1ms]
[08:42:54][Step 2/2] I181018 08:39:10.330934 80427 storage/replica_command.go:816 [n1,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:39:10.344741 81223 server/server_update.go:68 [n3] no need to upgrade, cluster already at the newest version
[08:42:54][Step 2/2] I181018 08:39:10.351902 81226 sql/event_log.go:126 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:33167} Attrs: Locality: ServerVersion:2.1-1 BuildTag:v2.2.0-alpha.00000000-1771-g310a049 StartedAt:1539851950153764867 LocalityAddress:[]} ClusterID:030388b7-c5a5-4a4f-8219-e0d7b8e388f0 StartedAt:1539851950153764867 LastUp:1539851950153764867}
[08:42:54][Step 2/2] I181018 08:39:10.362846 80427 storage/replica.go:3884 [n1,s1,r1/1:/M{in-ax}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:39:10.424669 80427 storage/store_snapshot.go:621 [n1,s1,r1/1:/M{in-ax}] sending preemptive snapshot 0fb494bc at applied index 90
[08:42:54][Step 2/2] I181018 08:39:10.444846 80427 storage/store_snapshot.go:664 [n1,s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 1087, log entries: 11, rate-limit: 2.0 MiB/sec, 24ms
[08:42:54][Step 2/2] I181018 08:39:10.450897 81398 storage/replica_raftstorage.go:804 [n3,s3,r1/?:{-}] applying preemptive snapshot at index 90 (id=0fb494bc, encoded size=242318, 1 rocksdb batches, 11 log entries)
[08:42:54][Step 2/2] I181018 08:39:10.500480 81398 storage/replica_raftstorage.go:810 [n3,s3,r1/?:/M{in-ax}] applied preemptive snapshot in 49ms [clear=0ms batch=1ms entries=45ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:39:10.503988 80427 storage/replica_command.go:816 [n1,s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:39:10.560187 80427 storage/replica.go:3884 [n1,s1,r1/1:/M{in-ax}] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] I181018 08:39:10.595099 80643 gossip/gossip.go:1510 [n1] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:39:10.596398 80643 storage/stores.go:261 [n1] wrote 2 node addresses to persistent storage
[08:42:54][Step 2/2] I181018 08:39:11.158067 80942 gossip/gossip.go:1510 [n2] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:39:11.159370 80942 storage/stores.go:261 [n2] wrote 2 node addresses to persistent storage
[08:42:54][Step 2/2] I181018 08:39:11.732081 81500 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 2 [async] closedts-subscription
[08:42:54][Step 2/2] 1 [async] closedts-rangefeed-subscriber
[08:42:54][Step 2/2] W181018 08:39:11.733006 81279 storage/store.go:1671 [n3,s3,r1/3:/M{in-ax}] unable to gossip on capacity change: node unavailable; try another peer
[08:42:54][Step 2/2] I181018 08:39:11.734950 81498 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 2 [async] closedts-subscription
[08:42:54][Step 2/2] 1 [async] transport racer
[08:42:54][Step 2/2] 1 [async] closedts-rangefeed-subscriber
[08:42:54][Step 2/2] I181018 08:39:11.737644 81499 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 2 [async] closedts-subscription
[08:42:54][Step 2/2] 1 [async] closedts-rangefeed-subscriber
[08:42:54][Step 2/2] W181018 08:39:11.738977 81437 storage/raft_transport.go:584 [n3] while processing outgoing Raft queue to node 2: EOF:
[08:42:54][Step 2/2] W181018 08:39:11.739827 81479 storage/raft_transport.go:584 [n2] while processing outgoing Raft queue to node 3: EOF:
[08:42:54][Step 2/2] W181018 08:39:11.746221 81433 storage/raft_transport.go:584 [n3] while processing outgoing Raft queue to node 1: rpc error: code = Canceled desc = grpc: the client connection is closing:
[08:42:54][Step 2/2] W181018 08:39:11.751176 81424 storage/raft_transport.go:584 [n1] while processing outgoing Raft queue to node 2: EOF:
[08:42:54][Step 2/2] W181018 08:39:11.753545 81385 storage/raft_transport.go:584 [n1] while processing outgoing Raft queue to node 3: rpc error: code = Unavailable desc = transport is closing:
[08:42:54][Step 2/2] W181018 08:39:11.755258 81430 storage/raft_transport.go:584 [n2] while processing outgoing Raft queue to node 1: rpc error: code = Canceled desc = grpc: the client connection is closing:
[08:42:54][Step 2/2] I181018 08:39:11.757301 81500 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] closedts-subscription
[08:42:54][Step 2/2] 1 [async] closedts-rangefeed-subscriber
[08:42:54][Step 2/2] W181018 08:39:11.757365 80942 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:11.757763 81500 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] closedts-subscription
[08:42:54][Step 2/2] I181018 08:39:11.758903 81498 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] transport racer
[08:42:54][Step 2/2] 1 [async] closedts-subscription
[08:42:54][Step 2/2] I181018 08:39:11.759099 81499 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 2 [async] closedts-subscription
[08:42:54][Step 2/2] I181018 08:39:11.759318 81498 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] transport racer
[08:42:54][Step 2/2] I181018 08:39:11.759512 81499 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] closedts-subscription
[08:42:54][Step 2/2] I181018 08:39:11.783512 81502 rpc/nodedialer/nodedialer.go:91 [ct-client] unable to connect to n2: context canceled
[08:42:54][Step 2/2] I181018 08:39:11.784009 81501 rpc/nodedialer/nodedialer.go:91 [ct-client] unable to connect to n1: context canceled
[08:42:54][Step 2/2] I181018 08:39:11.790866 81441 rpc/nodedialer/nodedialer.go:91 [ct-client] unable to connect to n3: context canceled
[08:42:54][Step 2/2] I181018 08:39:11.792073 81442 rpc/nodedialer/nodedialer.go:91 [ct-client] unable to connect to n1: context canceled
[08:42:54][Step 2/2] I181018 08:39:11.842646 80851 kv/transport_race.go:113 transport race promotion: ran 46 iterations on up to 372 requests
[08:42:54][Step 2/2] --- PASS: TestLeaseInfoRequest (3.36s)
[08:42:54][Step 2/2] === RUN TestErrorHandlingForNonKVCommand
[08:42:54][Step 2/2] W181018 08:39:11.928713 81467 server/status/runtime.go:295 [n?] Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
[08:42:54][Step 2/2] I181018 08:39:11.957892 81467 server/server.go:851 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
[08:42:54][Step 2/2] I181018 08:39:11.958508 81467 base/addr_validation.go:279 [n?] server certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
[08:42:54][Step 2/2] I181018 08:39:11.958605 81467 base/addr_validation.go:319 [n?] web UI certificate addresses: IP=127.0.0.1,::1; DNS=localhost,*.local; CN=node
[08:42:54][Step 2/2] I181018 08:39:11.964272 81467 server/config.go:493 [n?] 1 storage engine initialized
[08:42:54][Step 2/2] I181018 08:39:11.964429 81467 server/config.go:496 [n?] RocksDB cache size: 128 MiB
[08:42:54][Step 2/2] I181018 08:39:11.964484 81467 server/config.go:496 [n?] store 0: in-memory, size 0 B
[08:42:54][Step 2/2] I181018 08:39:12.015754 81467 server/node.go:371 [n?] **** cluster 6df94670-aac2-4aaf-90c5-e44239a671e4 has been created
[08:42:54][Step 2/2] I181018 08:39:12.015920 81467 server/server.go:1397 [n?] **** add additional nodes by specifying --join=127.0.0.1:40409
[08:42:54][Step 2/2] I181018 08:39:12.017088 81467 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:40409" > attrs:<> locality:<> ServerVersion:<major_val:2 minor_val:1 patch:0 unstable:1 > build_tag:"v2.2.0-alpha.00000000-1771-g310a049" started_at:1539851952016753146
[08:42:54][Step 2/2] I181018 08:39:12.033471 81467 server/node.go:475 [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=7.1 KiB), ranges=1, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=7221.00 p25=7221.00 p50=7221.00 p75=7221.00 p90=7221.00 pMax=7221.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
[08:42:54][Step 2/2] I181018 08:39:12.033839 81467 storage/stores.go:242 [n1] read 0 node addresses from persistent storage
[08:42:54][Step 2/2] I181018 08:39:12.034610 81467 server/node.go:698 [n1] connecting to gossip network to verify cluster ID...
[08:42:54][Step 2/2] I181018 08:39:12.034815 81467 server/node.go:723 [n1] node connected via gossip and verified as part of cluster "6df94670-aac2-4aaf-90c5-e44239a671e4"
[08:42:54][Step 2/2] I181018 08:39:12.035245 81467 server/node.go:547 [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
[08:42:54][Step 2/2] I181018 08:39:12.036447 81467 server/status/recorder.go:610 [n1] available memory from cgroups (8.0 EiB) exceeds system memory 16 GiB, using system memory
[08:42:54][Step 2/2] I181018 08:39:12.036565 81467 server/server.go:1822 [n1] Could not start heap profiler worker due to: directory to store profiles could not be determined
[08:42:54][Step 2/2] I181018 08:39:12.039127 81467 server/server.go:1529 [n1] starting https server at 127.0.0.1:44221 (use: 127.0.0.1:44221)
[08:42:54][Step 2/2] I181018 08:39:12.039291 81467 server/server.go:1531 [n1] starting grpc/postgres server at 127.0.0.1:40409
[08:42:54][Step 2/2] I181018 08:39:12.039346 81467 server/server.go:1532 [n1] advertising CockroachDB node at 127.0.0.1:40409
[08:42:54][Step 2/2] W181018 08:39:12.039548 81467 jobs/registry.go:317 [n1] unable to get node liveness: node not in the liveness table
[08:42:54][Step 2/2] I181018 08:39:12.060758 81393 storage/replica_command.go:300 [n1,split,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
[08:42:54][Step 2/2] I181018 08:39:12.145291 81798 storage/replica_command.go:300 [n1,split,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
[08:42:54][Step 2/2] W181018 08:39:12.152382 81796 storage/intent_resolver.go:675 [n1,s1] failed to push during intent resolution: failed to push "unnamed" id=c38ec292 key=/Table/SystemConfigSpan/Start rw=true pri=0.00007004 iso=SERIALIZABLE stat=PENDING epo=0 ts=1539851952.080170728,0 orig=1539851952.080170728,0 max=1539851952.080170728,0 wto=false rop=false seq=6
[08:42:54][Step 2/2] I181018 08:39:12.221851 81455 sql/event_log.go:126 [n1,intExec=optInToDiagnosticsStatReporting] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:root}
[08:42:54][Step 2/2] I181018 08:39:12.318749 81512 storage/replica_command.go:300 [n1,split,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
[08:42:54][Step 2/2] I181018 08:39:12.442195 81805 storage/replica_command.go:300 [n1,split,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
[08:42:54][Step 2/2] I181018 08:39:12.558169 81803 sql/event_log.go:126 [n1,intExec=set-setting] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:2.1-1 User:root}
[08:42:54][Step 2/2] I181018 08:39:12.570846 81488 storage/replica_command.go:300 [n1,split,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
[08:42:54][Step 2/2] I181018 08:39:12.662637 81772 storage/replica_command.go:300 [n1,split,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
[08:42:54][Step 2/2] I181018 08:39:12.703403 81768 sql/event_log.go:126 [n1,intExec=disableNetTrace] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:root}
[08:42:54][Step 2/2] I181018 08:39:12.753384 81783 storage/replica_command.go:300 [n1,split,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
[08:42:54][Step 2/2] I181018 08:39:12.856703 81789 storage/replica_command.go:300 [n1,split,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
[08:42:54][Step 2/2] I181018 08:39:12.974723 81834 storage/replica_command.go:300 [n1,split,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
[08:42:54][Step 2/2] I181018 08:39:12.998313 81739 sql/event_log.go:126 [n1,intExec=initializeClusterSecret] Event: "set_cluster_setting", target: 0, info: {SettingName:cluster.secret Value:e3f1a4c9-df5a-4647-b234-de5f10adbe33 User:root}
[08:42:54][Step 2/2] I181018 08:39:13.107286 81907 storage/replica_command.go:300 [n1,split,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
[08:42:54][Step 2/2] I181018 08:39:13.129920 81863 sql/event_log.go:126 [n1,intExec=create-default-db] Event: "create_database", target: 50, info: {DatabaseName:defaultdb Statement:CREATE DATABASE IF NOT EXISTS defaultdb User:root}
[08:42:54][Step 2/2] I181018 08:39:13.178423 81923 storage/replica_command.go:300 [n1,split,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
[08:42:54][Step 2/2] I181018 08:39:13.214337 81893 sql/event_log.go:126 [n1,intExec=create-default-db] Event: "create_database", target: 51, info: {DatabaseName:postgres Statement:CREATE DATABASE IF NOT EXISTS postgres User:root}
[08:42:54][Step 2/2] I181018 08:39:13.265499 81928 storage/replica_command.go:300 [n1,split,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
[08:42:54][Step 2/2] I181018 08:39:13.282264 81467 server/server.go:1585 [n1] done ensuring all necessary migrations have run
[08:42:54][Step 2/2] I181018 08:39:13.282477 81467 server/server.go:1588 [n1] serving sql connections
[08:42:54][Step 2/2] I181018 08:39:13.324424 81467 server/testserver.go:427 had 12 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.325458 81467 server/testserver.go:427 had 12 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.326375 81467 server/testserver.go:427 had 12 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.327536 81467 server/testserver.go:427 had 12 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.329097 81467 server/testserver.go:427 had 12 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.372553 81852 storage/replica_command.go:300 [n1,split,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
[08:42:54][Step 2/2] I181018 08:39:13.400319 81932 sql/event_log.go:126 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:40409} Attrs: Locality: ServerVersion:2.1-1 BuildTag:v2.2.0-alpha.00000000-1771-g310a049 StartedAt:1539851952016753146 LocalityAddress:[]} ClusterID:6df94670-aac2-4aaf-90c5-e44239a671e4 StartedAt:1539851952016753146 LastUp:1539851952016753146}
[08:42:54][Step 2/2] I181018 08:39:13.401645 81467 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.403024 81929 server/server_update.go:68 [n1] no need to upgrade, cluster already at the newest version
[08:42:54][Step 2/2] I181018 08:39:13.419835 81467 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.421033 81467 server/testserver.go:427 had 13 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.456322 81890 storage/replica_command.go:300 [n1,split,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
[08:42:54][Step 2/2] I181018 08:39:13.465128 81467 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.473651 81467 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.475612 81467 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.476749 81467 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.477951 81467 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.482261 81467 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.483485 81467 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.484526 81467 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.486445 81467 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.487710 81467 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.490400 81467 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.491553 81467 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.493386 81467 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.496414 81467 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.499416 81467 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.504738 81467 server/testserver.go:427 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.534425 81918 storage/replica_command.go:300 [n1,split,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
[08:42:54][Step 2/2] I181018 08:39:13.543359 81467 server/testserver.go:427 had 15 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.562911 81467 server/testserver.go:427 had 15 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.612745 81874 storage/replica_command.go:300 [n1,split,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
[08:42:54][Step 2/2] I181018 08:39:13.622785 81467 server/testserver.go:427 had 16 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.689044 82006 storage/replica_command.go:300 [n1,split,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
[08:42:54][Step 2/2] I181018 08:39:13.699217 81467 server/testserver.go:427 had 17 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:13.721522 81571 gossip/gossip.go:1510 [n1] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:39:13.722606 81571 storage/stores.go:261 [n1] wrote 0 node addresses to persistent storage
[08:42:54][Step 2/2] I181018 08:39:13.777268 81902 storage/replica_command.go:300 [n1,split,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
[08:42:54][Step 2/2] I181018 08:39:13.841597 81968 storage/replica_command.go:300 [n1,split,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
[08:42:54][Step 2/2] I181018 08:39:13.861976 81467 server/testserver.go:427 had 19 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:14.138468 81467 storage/replica_command.go:75 [n1,s1,r6/1:/{System/tse-Table/System…}] test injecting error: storage/client_replica_test.go:1349: injected error
[08:42:54][Step 2/2] I181018 08:39:14.139416 81484 kv/transport_race.go:113 transport race promotion: ran 28 iterations on up to 787 requests
[08:42:54][Step 2/2] I181018 08:39:14.139870 81467 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] transport racer
[08:42:54][Step 2/2] 1 [async] storage.consistencyChecker: processing replica
[08:42:54][Step 2/2] 1 [async] storage.Replica: computing checksum
[08:42:54][Step 2/2] 1 [async] storage.Replica: checking consistency
[08:42:54][Step 2/2] 1 [async] closedts-rangefeed-subscriber
[08:42:54][Step 2/2] I181018 08:39:14.142903 81467 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] storage.consistencyChecker: processing replica
[08:42:54][Step 2/2] 1 [async] storage.Replica: computing checksum
[08:42:54][Step 2/2] 1 [async] storage.Replica: checking consistency
[08:42:54][Step 2/2] 1 [async] closedts-rangefeed-subscriber
[08:42:54][Step 2/2] I181018 08:39:14.147239 81467 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] storage.Replica: computing checksum
[08:42:54][Step 2/2] 1 [async] closedts-rangefeed-subscriber
[08:42:54][Step 2/2] I181018 08:39:14.147873 81467 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] storage.Replica: computing checksum
[08:42:54][Step 2/2] --- PASS: TestErrorHandlingForNonKVCommand (2.38s)
[08:42:54][Step 2/2] === RUN TestRangeInfo
[08:42:54][Step 2/2] I181018 08:39:14.334221 81999 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:43145" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:39:14.395990 81999 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:14.397416 81999 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:41301" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:14.399301 82275 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:43145
[08:42:54][Step 2/2] I181018 08:39:14.456895 81999 storage/store_snapshot.go:621 [s1,r1/1:/M{in-ax}] sending preemptive snapshot 0d7d528e at applied index 15
[08:42:54][Step 2/2] I181018 08:39:14.459056 81999 storage/store_snapshot.go:664 [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 48, log entries: 5, rate-limit: 2.0 MiB/sec, 8ms
[08:42:54][Step 2/2] I181018 08:39:14.461538 82269 storage/replica_raftstorage.go:804 [s2,r1/?:{-}] applying preemptive snapshot at index 15 (id=0d7d528e, encoded size=8165, 1 rocksdb batches, 5 log entries)
[08:42:54][Step 2/2] I181018 08:39:14.467158 82269 storage/replica_raftstorage.go:810 [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=2ms commit=2ms]
[08:42:54][Step 2/2] I181018 08:39:14.473049 81999 storage/replica_command.go:816 [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:39:14.486934 81999 storage/replica.go:3884 [s1,r1/1:/M{in-ax},txn=6d88cefb] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] I181018 08:39:14.648220 81999 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r2]
[08:42:54][Step 2/2] I181018 08:39:14.734882 82208 storage/replica_proposal.go:212 [s2,r2/2:{a-/Max}] new range lease repl=(n2,s2):2 seq=2 start=0.000000123,274 epo=1 pro=0.000000123,275 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,6
[08:42:54][Step 2/2] W181018 08:39:14.769691 82309 storage/raft_transport.go:282 unable to accept Raft message from (n2,s2):2: no handler registered for (n1,s1):1
[08:42:54][Step 2/2] W181018 08:39:14.771540 82308 storage/store.go:3662 [s2,r2/2:{a-/Max}] raft error: node 1 claims to not contain store 1 for replica (n1,s1):1: store 1 was not found
[08:42:54][Step 2/2] W181018 08:39:14.771976 82034 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: store 1 was not found:
[08:42:54][Step 2/2] --- PASS: TestRangeInfo (0.54s)
[08:42:54][Step 2/2] === RUN TestDrainRangeRejection
[08:42:54][Step 2/2] I181018 08:39:14.884003 82017 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:35429" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:39:14.934809 82017 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:14.935820 82017 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:41933" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:14.937621 82473 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:35429
[08:42:54][Step 2/2] --- PASS: TestDrainRangeRejection (0.20s)
[08:42:54][Step 2/2] === RUN TestSystemZoneConfigs
[08:42:54][Step 2/2] --- SKIP: TestSystemZoneConfigs (0.01s)
[08:42:54][Step 2/2] client_replica_test.go:1597:
[08:42:54][Step 2/2] === RUN TestClearRange
[08:42:54][Step 2/2] I181018 08:39:15.016895 82456 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:15.107338 82336 storage/replica_command.go:300 [split,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
[08:42:54][Step 2/2] I181018 08:39:15.132127 82491 storage/replica_command.go:300 [split,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
[08:42:54][Step 2/2] I181018 08:39:15.170200 82345 storage/replica_command.go:300 [split,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
[08:42:54][Step 2/2] W181018 08:39:15.210224 82695 storage/intent_resolver.go:675 [s1] failed to push during intent resolution: failed to push "split" id=ace610e4 key=/Local/Range/System/NodeLiveness/RangeDescriptor rw=true pri=0.01226291 iso=SERIALIZABLE stat=PENDING epo=0 ts=1539851955.170388223,0 orig=1539851955.170388223,0 max=1539851955.170388224,0 wto=false rop=false seq=1
[08:42:54][Step 2/2] I181018 08:39:15.244417 82482 storage/replica_command.go:300 [split,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
[08:42:54][Step 2/2] I181018 08:39:15.296278 82711 storage/replica_command.go:300 [split,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
[08:42:54][Step 2/2] I181018 08:39:15.339267 82716 storage/replica_command.go:300 [split,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
[08:42:54][Step 2/2] I181018 08:39:15.391597 82718 storage/replica_command.go:300 [split,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
[08:42:54][Step 2/2] I181018 08:39:15.595511 82561 storage/replica_command.go:300 [split,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
[08:42:54][Step 2/2] I181018 08:39:15.648094 82741 storage/replica_command.go:300 [split,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
[08:42:54][Step 2/2] I181018 08:39:15.690501 82351 storage/replica_command.go:300 [split,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
[08:42:54][Step 2/2] I181018 08:39:15.730135 82352 storage/replica_command.go:300 [split,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
[08:42:54][Step 2/2] I181018 08:39:15.777929 82701 storage/replica_command.go:300 [split,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
[08:42:54][Step 2/2] I181018 08:39:15.831665 82771 storage/replica_command.go:300 [split,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
[08:42:54][Step 2/2] I181018 08:39:15.883299 82685 storage/replica_command.go:300 [split,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
[08:42:54][Step 2/2] I181018 08:39:15.940412 82772 storage/replica_command.go:300 [split,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
[08:42:54][Step 2/2] I181018 08:39:16.003813 82803 storage/replica_command.go:300 [split,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
[08:42:54][Step 2/2] I181018 08:39:16.085027 82774 storage/replica_command.go:300 [split,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
[08:42:54][Step 2/2] E181018 08:39:16.088408 82286 storage/consistency_queue.go:128 [consistencyChecker,s1,r1/1:/{Min-System/}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:16.088960 82286 storage/queue.go:791 [consistencyChecker,s1,r1/1:/{Min-System/}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] I181018 08:39:16.129264 82821 storage/replica_command.go:300 [split,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
[08:42:54][Step 2/2] I181018 08:39:16.217440 82813 storage/replica_command.go:300 [split,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
[08:42:54][Step 2/2] --- PASS: TestClearRange (2.09s)
[08:42:54][Step 2/2] === RUN TestStoreRangeSplitAtIllegalKeys
[08:42:54][Step 2/2] I181018 08:39:17.113146 82751 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] --- PASS: TestStoreRangeSplitAtIllegalKeys (0.09s)
[08:42:54][Step 2/2] === RUN TestStoreSplitAbortSpan
[08:42:54][Step 2/2] I181018 08:39:17.206853 82947 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:17.294286 82947 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
[08:42:54][Step 2/2] --- PASS: TestStoreSplitAbortSpan (0.13s)
[08:42:54][Step 2/2] === RUN TestStoreRangeSplitAtTablePrefix
[08:42:54][Step 2/2] I181018 08:39:17.335082 82735 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:17.391376 82735 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key /Table/50 [r2]
[08:42:54][Step 2/2] W181018 08:39:17.420805 82735 gossip/gossip.go:1119 [n1] raw gossip callback registered on system-db, consider using RegisterSystemConfigChannel
[08:42:54][Step 2/2] --- PASS: TestStoreRangeSplitAtTablePrefix (0.10s)
[08:42:54][Step 2/2] === RUN TestStoreRangeSplitInsideRow
[08:42:54][Step 2/2] I181018 08:39:17.442443 83117 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:17.560036 83117 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key /Table/50/1/1/"a" [r2]
[08:42:54][Step 2/2] I181018 08:39:17.575712 83117 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] storage.replicate: processing replica
[08:42:54][Step 2/2] I181018 08:39:17.577638 82834 storage/queue.go:876 [replicate] purgatory is now empty
[08:42:54][Step 2/2] --- PASS: TestStoreRangeSplitInsideRow (0.16s)
[08:42:54][Step 2/2] === RUN TestStoreRangeSplitIntents
[08:42:54][Step 2/2] I181018 08:39:17.597130 82951 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:17.673092 82951 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "m" [r2]
[08:42:54][Step 2/2] --- PASS: TestStoreRangeSplitIntents (0.12s)
[08:42:54][Step 2/2] === RUN TestStoreRangeSplitAtRangeBounds
[08:42:54][Step 2/2] I181018 08:39:17.713981 83229 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:17.784861 83229 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r2]
[08:42:54][Step 2/2] --- PASS: TestStoreRangeSplitAtRangeBounds (0.11s)
[08:42:54][Step 2/2] === RUN TestStoreRangeSplitConcurrent
[08:42:54][Step 2/2] I181018 08:39:17.827197 83444 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:17.904085 83555 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r3]
[08:42:54][Step 2/2] I181018 08:39:17.904594 82946 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r8]
[08:42:54][Step 2/2] I181018 08:39:17.904666 82943 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r7]
[08:42:54][Step 2/2] I181018 08:39:17.904332 83556 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r4]
[08:42:54][Step 2/2] I181018 08:39:17.904503 82945 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r6]
[08:42:54][Step 2/2] I181018 08:39:17.904375 82944 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r5]
[08:42:54][Step 2/2] I181018 08:39:17.904339 83560 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r9]
[08:42:54][Step 2/2] I181018 08:39:17.904867 83558 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r10]
[08:42:54][Step 2/2] I181018 08:39:17.904648 83557 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r2]
[08:42:54][Step 2/2] I181018 08:39:17.913263 83559 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "a" [r11]
[08:42:54][Step 2/2] --- PASS: TestStoreRangeSplitConcurrent (0.28s)
[08:42:54][Step 2/2] === RUN TestStoreRangeSplitIdempotency
[08:42:54][Step 2/2] I181018 08:39:18.117345 83550 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:18.197878 83550 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key "m" [r2]
[08:42:54][Step 2/2] --- PASS: TestStoreRangeSplitIdempotency (0.14s)
[08:42:54][Step 2/2] === RUN TestStoreRangeSplitStats
[08:42:54][Step 2/2] I181018 08:39:18.249758 83414 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:18.337538 83414 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key /Table/50 [r2]
[08:42:54][Step 2/2] I181018 08:39:18.718943 83414 storage/replica_command.go:300 [s1,r2/1:/{Table/50-Max}] initiating a split of this range at key /Table/50/"Z" [r3]
[08:42:54][Step 2/2] I181018 08:39:18.758602 83414 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:54][Step 2/2] 1 [async] kv.TxnCoordSender: heartbeat loop
[08:42:54][Step 2/2] I181018 08:39:18.758927 83419 storage/queue.go:876 [replicate] purgatory is now empty
[08:42:54][Step 2/2] I181018 08:39:18.759087 83414 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:54][Step 2/2] W181018 08:39:18.759262 83655 storage/intent_resolver.go:745 [s1] failed to cleanup transaction intents: failed to resolve intents: node unavailable; try another peer
[08:42:54][Step 2/2] --- PASS: TestStoreRangeSplitStats (0.54s)
[08:42:54][Step 2/2] === RUN TestStoreEmptyRangeSnapshotSize
[08:42:54][Step 2/2] I181018 08:39:18.864935 83669 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41993" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:39:18.924737 83669 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:18.925643 83669 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:40407" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:18.927570 83971 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:41993
[08:42:54][Step 2/2] I181018 08:39:18.948070 83669 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key /Table/50 [r2]
[08:42:54][Step 2/2] I181018 08:39:18.973780 83669 storage/store_snapshot.go:621 [s1,r2/1:/{Table/50-Max}] sending preemptive snapshot 01e94f2e at applied index 11
[08:42:54][Step 2/2] I181018 08:39:18.974968 83669 storage/store_snapshot.go:664 [s1,r2/1:/{Table/50-Max}] streamed snapshot to (n2,s2):?: kv pairs: 6, log entries: 1, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:39:18.976940 83762 storage/replica_raftstorage.go:804 [s2,r2/?:{-}] applying preemptive snapshot at index 11 (id=01e94f2e, encoded size=284, 1 rocksdb batches, 1 log entries)
[08:42:54][Step 2/2] I181018 08:39:18.979029 83762 storage/replica_raftstorage.go:810 [s2,r2/?:/{Table/50-Max}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:18.985110 83669 storage/replica_command.go:816 [s1,r2/1:/{Table/50-Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/{Table/50-Max} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:39:18.997340 83669 storage/replica.go:3884 [s1,r2/1:/{Table/50-Max},txn=fa53759c] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] --- PASS: TestStoreEmptyRangeSnapshotSize (0.29s)
[08:42:54][Step 2/2] === RUN TestStoreRangeSplitStatsWithMerges
[08:42:54][Step 2/2] I181018 08:39:19.079151 83818 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:19.151300 83818 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key /Table/50 [r2]
[08:42:54][Step 2/2] I181018 08:39:19.904705 83818 storage/replica_command.go:300 [s1,r2/1:/{Table/50-Max}] initiating a split of this range at key /Table/50/360000000000000 [r3]
[08:42:54][Step 2/2] I181018 08:39:19.943887 83818 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 storage.replicate: purgatory processing replica
[08:42:54][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:54][Step 2/2] 1 [async] kv.TxnCoordSender: heartbeat loop
[08:42:54][Step 2/2] I181018 08:39:19.944308 83818 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:54][Step 2/2] W181018 08:39:19.944710 84117 storage/intent_resolver.go:745 [s1] failed to cleanup transaction intents: failed to resolve intents: result is ambiguous (server shutdown)
[08:42:54][Step 2/2] --- PASS: TestStoreRangeSplitStatsWithMerges (0.89s)
[08:42:54][Step 2/2] === RUN TestStoreZoneUpdateAndRangeSplit
[08:42:54][Step 2/2] I181018 08:39:19.981524 84147 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:20.045837 84133 storage/replica_command.go:300 [split,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
[08:42:54][Step 2/2] I181018 08:39:20.066279 84245 storage/replica_command.go:300 [split,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
[08:42:54][Step 2/2] I181018 08:39:20.096465 84148 storage/replica_command.go:300 [split,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
[08:42:54][Step 2/2] W181018 08:39:20.126414 83997 storage/intent_resolver.go:675 [s1] failed to push during intent resolution: failed to push "split" id=c0e48b6e key=/Local/Range/System/NodeLiveness/RangeDescriptor rw=true pri=0.01722272 iso=SERIALIZABLE stat=PENDING epo=0 ts=1539851960.096665841,0 orig=1539851960.096665841,0 max=1539851960.096665842,0 wto=false rop=false seq=1
[08:42:54][Step 2/2] I181018 08:39:20.140154 84260 storage/replica_command.go:300 [split,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
[08:42:54][Step 2/2] I181018 08:39:20.185938 84251 storage/replica_command.go:300 [split,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
[08:42:54][Step 2/2] I181018 08:39:20.232354 84252 storage/replica_command.go:300 [split,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
[08:42:54][Step 2/2] I181018 08:39:20.268430 84256 storage/replica_command.go:300 [split,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/50 [r8]
[08:42:54][Step 2/2] E181018 08:39:21.036190 84291 storage/consistency_queue.go:128 [consistencyChecker,s1,r1/1:/{Min-System/}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:21.036646 84291 storage/queue.go:791 [consistencyChecker,s1,r1/1:/{Min-System/}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] I181018 08:39:21.471603 84294 storage/replica_command.go:300 [split,s1,r8/1:/{Table/50-Max}] initiating a split of this range at key /Table/50/"YGeKRrBkIvhWcAfBEqNsZOobLECefBjaLPktOWEXqxzheZldhUpSfwLhTWhqjIgHywOaYtgpNEbeIJKpvjEKOqLMZHhfjqoOAqnO" [r9]
[08:42:54][Step 2/2] I181018 08:39:21.521255 84147 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:54][Step 2/2] 1 [async] storage.split: processing replica
[08:42:54][Step 2/2] 1 [async] storage.replicate: processing replica
[08:42:54][Step 2/2] I181018 08:39:21.521840 84147 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:54][Step 2/2] 1 [async] storage.replicate: processing replica
[08:42:54][Step 2/2] I181018 08:39:21.523719 84247 storage/queue.go:876 [replicate] purgatory is now empty
[08:42:54][Step 2/2] --- PASS: TestStoreZoneUpdateAndRangeSplit (1.59s)
[08:42:54][Step 2/2] === RUN TestStoreRangeSplitWithMaxBytesUpdate
[08:42:54][Step 2/2] I181018 08:39:21.560566 84120 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:21.630185 84352 storage/replica_command.go:300 [split,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
[08:42:54][Step 2/2] I181018 08:39:21.654486 84303 storage/replica_command.go:300 [split,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
[08:42:54][Step 2/2] I181018 08:39:21.685683 84405 storage/replica_command.go:300 [split,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
[08:42:54][Step 2/2] I181018 08:39:21.727218 84282 storage/replica_command.go:300 [split,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
[08:42:54][Step 2/2] W181018 08:39:21.760709 84409 storage/intent_resolver.go:675 [s1] failed to push during intent resolution: failed to push "split" id=a5b0ef46 key=/Local/Range/System/NodeLivenessMax/RangeDescriptor rw=true pri=0.02659769 iso=SERIALIZABLE stat=PENDING epo=0 ts=1539851961.727700962,0 orig=1539851961.727700962,0 max=1539851961.727700963,0 wto=false rop=false seq=1
[08:42:54][Step 2/2] I181018 08:39:21.776392 84410 storage/replica_command.go:300 [split,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
[08:42:54][Step 2/2] I181018 08:39:21.823030 84286 storage/replica_command.go:300 [split,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
[08:42:54][Step 2/2] I181018 08:39:21.896332 84411 storage/replica_command.go:300 [split,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/50 [r8]
[08:42:54][Step 2/2] --- PASS: TestStoreRangeSplitWithMaxBytesUpdate (0.63s)
[08:42:54][Step 2/2] === RUN TestStoreRangeSplitBackpressureWrites
[08:42:54][Step 2/2] === RUN TestStoreRangeSplitBackpressureWrites/splitOngoing=true,splitErr=false
[08:42:54][Step 2/2] I181018 08:39:22.185671 84012 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:22.257790 84488 storage/replica_command.go:300 [split,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
[08:42:54][Step 2/2] I181018 08:39:22.278567 84415 storage/replica_command.go:300 [split,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
[08:42:54][Step 2/2] I181018 08:39:22.292371 84012 storage/client_split_test.go:1142 had 2 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.293896 84012 storage/client_split_test.go:1142 had 2 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.295442 84012 storage/client_split_test.go:1142 had 2 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.318430 84525 storage/replica_command.go:300 [split,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
[08:42:54][Step 2/2] I181018 08:39:22.332852 84012 storage/client_split_test.go:1142 had 3 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.338790 84012 storage/client_split_test.go:1142 had 3 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.365420 84563 storage/replica_command.go:300 [split,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
[08:42:54][Step 2/2] I181018 08:39:22.383051 84012 storage/client_split_test.go:1142 had 4 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.402351 84579 storage/replica_command.go:300 [split,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
[08:42:54][Step 2/2] I181018 08:39:22.417113 84012 storage/client_split_test.go:1142 had 5 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.421821 84012 storage/client_split_test.go:1142 had 5 ranges at startup, expected 20
[08:42:54][Step 2/2] W181018 08:39:22.433177 84570 storage/intent_resolver.go:675 [s1] failed to push during intent resolution: failed to push "split" id=60e1ae61 key=/Local/Range/System/tsd/RangeDescriptor rw=true pri=0.01198794 iso=SERIALIZABLE stat=PENDING epo=0 ts=1539851962.420818266,1 orig=1539851962.402547533,0 max=1539851962.402547534,0 wto=false rop=false seq=1
[08:42:54][Step 2/2] I181018 08:39:22.448741 84548 storage/replica_command.go:300 [split,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
[08:42:54][Step 2/2] I181018 08:39:22.459994 84012 storage/client_split_test.go:1142 had 6 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.487037 84551 storage/replica_command.go:300 [split,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
[08:42:54][Step 2/2] I181018 08:39:22.495782 84012 storage/client_split_test.go:1142 had 7 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.502293 84012 storage/client_split_test.go:1142 had 7 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.522892 84018 storage/replica_command.go:300 [split,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
[08:42:54][Step 2/2] I181018 08:39:22.533171 84012 storage/client_split_test.go:1142 had 8 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.538835 84012 storage/client_split_test.go:1142 had 8 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.562396 84596 storage/replica_command.go:300 [split,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
[08:42:54][Step 2/2] I181018 08:39:22.577790 84012 storage/client_split_test.go:1142 had 9 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.600755 84542 storage/replica_command.go:300 [split,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
[08:42:54][Step 2/2] I181018 08:39:22.612977 84012 storage/client_split_test.go:1142 had 10 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.642877 84429 storage/replica_command.go:300 [split,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
[08:42:54][Step 2/2] I181018 08:39:22.653031 84012 storage/client_split_test.go:1142 had 11 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.663347 84012 storage/client_split_test.go:1142 had 11 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.699910 84431 storage/replica_command.go:300 [split,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
[08:42:54][Step 2/2] I181018 08:39:22.718517 84012 storage/client_split_test.go:1142 had 12 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.720689 84012 storage/client_split_test.go:1142 had 12 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.734750 84012 storage/client_split_test.go:1142 had 12 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.739239 84012 storage/client_split_test.go:1142 had 12 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.788668 84561 storage/replica_command.go:300 [split,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
[08:42:54][Step 2/2] I181018 08:39:22.807856 84012 storage/client_split_test.go:1142 had 13 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.830828 84605 storage/replica_command.go:300 [split,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
[08:42:54][Step 2/2] I181018 08:39:22.839839 84012 storage/client_split_test.go:1142 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.850236 84012 storage/client_split_test.go:1142 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.889100 84643 storage/replica_command.go:300 [split,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
[08:42:54][Step 2/2] I181018 08:39:22.897375 84012 storage/client_split_test.go:1142 had 15 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.933080 84647 storage/replica_command.go:300 [split,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
[08:42:54][Step 2/2] I181018 08:39:22.945431 84012 storage/client_split_test.go:1142 had 16 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:22.973719 84693 storage/replica_command.go:300 [split,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
[08:42:54][Step 2/2] I181018 08:39:22.987232 84012 storage/client_split_test.go:1142 had 17 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:23.019637 84695 storage/replica_command.go:300 [split,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
[08:42:54][Step 2/2] I181018 08:39:23.077952 84609 storage/replica_command.go:300 [split,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
[08:42:54][Step 2/2] I181018 08:39:23.083233 84012 storage/client_split_test.go:1142 had 19 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:23.221538 84012 storage/replica_command.go:300 [s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
[08:42:54][Step 2/2] E181018 08:39:23.251420 84610 storage/consistency_queue.go:128 [consistencyChecker,s1,r1/1:/{Min-System/}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:23.252270 84610 storage/queue.go:791 [consistencyChecker,s1,r1/1:/{Min-System/}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:24.247320 84623 storage/consistency_queue.go:128 [consistencyChecker,s1,r6/1:/{System/tse-Table/System…}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:24.248044 84623 storage/queue.go:791 [consistencyChecker,s1,r6/1:/{System/tse-Table/System…}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] I181018 08:39:25.175565 84713 storage/replica_command.go:300 [split,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/50/"MJoCeJamymgBLZmoqFENrdUGORsKxiLbkDPKcphWdsymNQCJotMVPNjFRDVvoNTVfuyLxxmgVRmraONHxfIZpXaFJXoUacsRzUdk" [r22]
[08:42:54][Step 2/2] W181018 08:39:25.178877 84715 storage/replica_backpressure.go:135 [s1,r21/1:/{Table/50-Max}] applying backpressure to limit range growth on batch Put [/Table/50,/Min)
[08:42:54][Step 2/2] E181018 08:39:25.246920 84724 storage/consistency_queue.go:128 [consistencyChecker,s1,r10/1:/Table/1{3-4}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:25.247382 84724 storage/queue.go:791 [consistencyChecker,s1,r10/1:/Table/1{3-4}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] I181018 08:39:25.328010 84723 storage/replica_command.go:300 [split,s1,r22/1:/{Table/50/"MJ…-Max}] initiating a split of this range at key /Table/50/"YwVPOPnfuMhvIkdtHUvJZGIDxTcUTKjInyTAqLJXzqTjrTdbkiALgTfquMpGkOpfOcpJbTpiUHQNkbAxujnvgnOgpiZqJlMeFwTB" [r23]
[08:42:54][Step 2/2] I181018 08:39:25.336505 84012 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 storage.replicate: purgatory processing replica
[08:42:54][Step 2/2] 1 [async] kv.TxnCoordSender: heartbeat loop
[08:42:54][Step 2/2] 1 [async] force split
[08:42:54][Step 2/2] I181018 08:39:25.337574 84012 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 storage.replicate: purgatory processing replica
[08:42:54][Step 2/2] 1 [async] force split
[08:42:54][Step 2/2] I181018 08:39:25.337910 84012 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] force split
[08:42:54][Step 2/2] W181018 08:39:25.340842 84723 internal/client/txn.go:532 [split,s1,r22/1:/{Table/50/"MJ…-Max}] failure aborting transaction: node unavailable; try another peer; abort caused by: kv/txn_interceptor_heartbeat.go:405: node already quiescing
[08:42:54][Step 2/2] E181018 08:39:25.341443 84723 storage/queue.go:791 [split,s1,r22/1:/{Table/50/"MJ…-Max}] split at key /Table/50/"YwVPOPnfuMhvIkdtHUvJZGIDxTcUTKjInyTAqLJXzqTjrTdbkiALgTfquMpGkOpfOcpJbTpiUHQNkbAxujnvgnOgpiZqJlMeFwTB" failed: kv/txn_interceptor_heartbeat.go:405: node already quiescing
[08:42:54][Step 2/2] I181018 08:39:25.342431 84723 storage/replica_command.go:300 [split,s1,r22/1:/{Table/50/"MJ…-Max}] initiating a split of this range at key /Table/50/"YwVPOPnfuMhvIkdtHUvJZGIDxTcUTKjInyTAqLJXzqTjrTdbkiALgTfquMpGkOpfOcpJbTpiUHQNkbAxujnvgnOgpiZqJlMeFwTB" [r24]
[08:42:54][Step 2/2] W181018 08:39:25.343332 84519 storage/idalloc/id_alloc.go:114 [s1] node unavailable; try another peer
[08:42:54][Step 2/2] W181018 08:39:25.344152 84723 internal/client/txn.go:532 [split,s1,r22/1:/{Table/50/"MJ…-Max}] failure aborting transaction: node unavailable; try another peer; abort caused by: node unavailable; try another peer
[08:42:54][Step 2/2] E181018 08:39:25.344566 84723 storage/queue.go:791 [split,s1,r22/1:/{Table/50/"MJ…-Max}] split at key /Table/50/"YwVPOPnfuMhvIkdtHUvJZGIDxTcUTKjInyTAqLJXzqTjrTdbkiALgTfquMpGkOpfOcpJbTpiUHQNkbAxujnvgnOgpiZqJlMeFwTB" failed: node unavailable; try another peer
[08:42:54][Step 2/2] === RUN TestStoreRangeSplitBackpressureWrites/splitOngoing=true,splitErr=true
[08:42:54][Step 2/2] I181018 08:39:25.361506 84706 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:25.436967 84853 storage/replica_command.go:300 [split,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
[08:42:54][Step 2/2] I181018 08:39:25.458378 84731 storage/replica_command.go:300 [split,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
[08:42:54][Step 2/2] I181018 08:39:25.492064 84633 storage/replica_command.go:300 [split,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
[08:42:54][Step 2/2] I181018 08:39:25.508410 84706 storage/client_split_test.go:1142 had 3 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.510009 84706 storage/client_split_test.go:1142 had 3 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.511449 84706 storage/client_split_test.go:1142 had 3 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.512885 84706 storage/client_split_test.go:1142 had 3 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.514258 84706 storage/client_split_test.go:1142 had 3 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.515534 84706 storage/client_split_test.go:1142 had 3 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.516963 84706 storage/client_split_test.go:1142 had 3 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.540311 84868 storage/replica_command.go:300 [split,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
[08:42:54][Step 2/2] I181018 08:39:25.549702 84706 storage/client_split_test.go:1142 had 4 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.557877 84706 storage/client_split_test.go:1142 had 4 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.588211 84883 storage/replica_command.go:300 [split,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
[08:42:54][Step 2/2] I181018 08:39:25.621717 84706 storage/client_split_test.go:1142 had 5 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.627457 84706 storage/client_split_test.go:1142 had 5 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.629088 84706 storage/client_split_test.go:1142 had 5 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.661083 84876 storage/replica_command.go:300 [split,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
[08:42:54][Step 2/2] I181018 08:39:25.666444 84706 storage/client_split_test.go:1142 had 6 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.671934 84706 storage/client_split_test.go:1142 had 6 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.697663 84722 storage/replica_command.go:300 [split,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
[08:42:54][Step 2/2] I181018 08:39:25.702805 84706 storage/client_split_test.go:1142 had 7 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.708290 84706 storage/client_split_test.go:1142 had 7 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.730697 84900 storage/replica_command.go:300 [split,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
[08:42:54][Step 2/2] I181018 08:39:25.741737 84706 storage/client_split_test.go:1142 had 8 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.743485 84706 storage/client_split_test.go:1142 had 8 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.771096 84657 storage/replica_command.go:300 [split,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
[08:42:54][Step 2/2] I181018 08:39:25.796937 84706 storage/client_split_test.go:1142 had 9 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.809763 84893 storage/replica_command.go:300 [split,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
[08:42:54][Step 2/2] I181018 08:39:25.822773 84706 storage/client_split_test.go:1142 had 10 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.867094 84903 storage/replica_command.go:300 [split,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
[08:42:54][Step 2/2] I181018 08:39:25.900887 84706 storage/client_split_test.go:1142 had 11 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.916992 84894 storage/replica_command.go:300 [split,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
[08:42:54][Step 2/2] I181018 08:39:25.924849 84706 storage/client_split_test.go:1142 had 12 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.933406 84706 storage/client_split_test.go:1142 had 12 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.954906 84979 storage/replica_command.go:300 [split,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
[08:42:54][Step 2/2] I181018 08:39:25.964137 84706 storage/client_split_test.go:1142 had 13 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:25.994549 84939 storage/replica_command.go:300 [split,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
[08:42:54][Step 2/2] I181018 08:39:26.002957 84706 storage/client_split_test.go:1142 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:26.036784 84968 storage/replica_command.go:300 [split,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
[08:42:54][Step 2/2] I181018 08:39:26.044102 84706 storage/client_split_test.go:1142 had 15 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:26.072679 84942 storage/replica_command.go:300 [split,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
[08:42:54][Step 2/2] I181018 08:39:26.082982 84706 storage/client_split_test.go:1142 had 16 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:26.111895 84969 storage/replica_command.go:300 [split,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
[08:42:54][Step 2/2] I181018 08:39:26.149960 84983 storage/replica_command.go:300 [split,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
[08:42:54][Step 2/2] I181018 08:39:26.156913 84706 storage/client_split_test.go:1142 had 18 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:26.184574 84972 storage/replica_command.go:300 [split,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
[08:42:54][Step 2/2] I181018 08:39:26.293681 84706 storage/replica_command.go:300 [s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
[08:42:54][Step 2/2] E181018 08:39:26.431150 84959 storage/consistency_queue.go:128 [consistencyChecker,s1,r1/1:/{Min-System/}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:26.431705 84959 storage/queue.go:791 [consistencyChecker,s1,r1/1:/{Min-System/}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:27.432944 85043 storage/consistency_queue.go:128 [consistencyChecker,s1,r9/1:/Table/1{2-3}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:27.433521 85043 storage/queue.go:791 [consistencyChecker,s1,r9/1:/Table/1{2-3}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] I181018 08:39:28.424915 85035 storage/replica_command.go:300 [split,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/50/"MJoCeJamymgBLZmoqFENrdUGORsKxiLbkDPKcphWdsymNQCJotMVPNjFRDVvoNTVfuyLxxmgVRmraONHxfIZpXaFJXoUacsRzUdk" [r22]
[08:42:54][Step 2/2] W181018 08:39:28.431041 85037 storage/replica_backpressure.go:135 [s1,r21/1:/{Table/50-Max}] applying backpressure to limit range growth on batch Put [/Table/50,/Min)
[08:42:54][Step 2/2] E181018 08:39:28.449065 85025 storage/consistency_queue.go:128 [consistencyChecker,s1,r16/1:/Table/{19-20}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:28.449492 85025 storage/queue.go:791 [consistencyChecker,s1,r16/1:/Table/{19-20}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:28.535397 85035 storage/queue.go:791 [split,s1,r21/1:/{Table/50-Max}] split at key /Table/50/"MJoCeJamymgBLZmoqFENrdUGORsKxiLbkDPKcphWdsymNQCJotMVPNjFRDVvoNTVfuyLxxmgVRmraONHxfIZpXaFJXoUacsRzUdk" failed: storage/client_split_test.go:1128: boom
[08:42:54][Step 2/2] === RUN TestStoreRangeSplitBackpressureWrites/splitOngoing=false,splitErr=false
[08:42:54][Step 2/2] I181018 08:39:28.553423 85045 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:28.625452 85143 storage/replica_command.go:300 [split,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
[08:42:54][Step 2/2] I181018 08:39:28.656146 85187 storage/replica_command.go:300 [split,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
[08:42:54][Step 2/2] I181018 08:39:28.664714 85045 storage/client_split_test.go:1142 had 2 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:28.666289 85045 storage/client_split_test.go:1142 had 2 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:28.686795 85165 storage/replica_command.go:300 [split,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
[08:42:54][Step 2/2] I181018 08:39:28.699192 85045 storage/client_split_test.go:1142 had 3 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:28.724285 85168 storage/replica_command.go:300 [split,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
[08:42:54][Step 2/2] I181018 08:39:28.754189 85045 storage/client_split_test.go:1142 had 4 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:28.758226 85045 storage/client_split_test.go:1142 had 4 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:28.759729 85045 storage/client_split_test.go:1142 had 4 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:28.788748 85147 storage/replica_command.go:300 [split,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
[08:42:54][Step 2/2] I181018 08:39:28.804308 85045 storage/client_split_test.go:1142 had 5 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:28.806007 85045 storage/client_split_test.go:1142 had 5 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:28.807303 85045 storage/client_split_test.go:1142 had 5 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:28.808307 85045 storage/client_split_test.go:1142 had 5 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:28.830694 85193 storage/replica_command.go:300 [split,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
[08:42:54][Step 2/2] I181018 08:39:28.842838 85045 storage/client_split_test.go:1142 had 6 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:28.866250 85067 storage/replica_command.go:300 [split,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
[08:42:54][Step 2/2] I181018 08:39:28.873697 85045 storage/client_split_test.go:1142 had 7 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:28.900633 84989 storage/replica_command.go:300 [split,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
[08:42:54][Step 2/2] I181018 08:39:28.906706 85045 storage/client_split_test.go:1142 had 8 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:28.912624 85045 storage/client_split_test.go:1142 had 8 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:28.937370 85206 storage/replica_command.go:300 [split,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
[08:42:54][Step 2/2] I181018 08:39:28.945514 85045 storage/client_split_test.go:1142 had 9 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:28.986547 85208 storage/replica_command.go:300 [split,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
[08:42:54][Step 2/2] I181018 08:39:28.991703 85045 storage/client_split_test.go:1142 had 10 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:29.017563 85197 storage/replica_command.go:300 [split,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
[08:42:54][Step 2/2] I181018 08:39:29.026608 85045 storage/client_split_test.go:1142 had 11 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:29.053074 84992 storage/replica_command.go:300 [split,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
[08:42:54][Step 2/2] I181018 08:39:29.065104 85045 storage/client_split_test.go:1142 had 12 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:29.111099 85069 storage/replica_command.go:300 [split,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
[08:42:54][Step 2/2] I181018 08:39:29.121048 85045 storage/client_split_test.go:1142 had 13 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:29.123898 85045 storage/client_split_test.go:1142 had 13 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:29.126609 85045 storage/client_split_test.go:1142 had 13 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:29.158887 85242 storage/replica_command.go:300 [split,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
[08:42:54][Step 2/2] I181018 08:39:29.164905 85045 storage/client_split_test.go:1142 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:29.171861 85045 storage/client_split_test.go:1142 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:29.203368 85254 storage/replica_command.go:300 [split,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
[08:42:54][Step 2/2] I181018 08:39:29.211659 85045 storage/client_split_test.go:1142 had 15 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:29.237469 85214 storage/replica_command.go:300 [split,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
[08:42:54][Step 2/2] I181018 08:39:29.250852 85045 storage/client_split_test.go:1142 had 16 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:29.285238 85284 storage/replica_command.go:300 [split,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
[08:42:54][Step 2/2] I181018 08:39:29.292544 85045 storage/client_split_test.go:1142 had 17 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:29.329080 85315 storage/replica_command.go:300 [split,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
[08:42:54][Step 2/2] I181018 08:39:29.337017 85045 storage/client_split_test.go:1142 had 18 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:29.378680 85318 storage/replica_command.go:300 [split,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
[08:42:54][Step 2/2] I181018 08:39:29.428476 85045 storage/replica_command.go:300 [s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
[08:42:54][Step 2/2] E181018 08:39:29.610244 85324 storage/consistency_queue.go:128 [consistencyChecker,s1,r1/1:/{Min-System/}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:29.610810 85324 storage/queue.go:791 [consistencyChecker,s1,r1/1:/{Min-System/}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:30.631829 85260 storage/consistency_queue.go:128 [consistencyChecker,s1,r21/1:/{Table/50-Max}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:30.632393 85260 storage/queue.go:791 [consistencyChecker,s1,r21/1:/{Table/50-Max}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] W181018 08:39:31.477308 85290 storage/replica_backpressure.go:135 [s1,r21/1:/{Table/50-Max}] applying backpressure to limit range growth on batch Put [/Table/50,/Min)
[08:42:54][Step 2/2] === RUN TestStoreRangeSplitBackpressureWrites/splitImpossible=true
[08:42:54][Step 2/2] I181018 08:39:31.506399 85363 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:31.578380 85363 storage/client_split_test.go:1142 had 1 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.579643 85363 storage/client_split_test.go:1142 had 1 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.580946 85363 storage/client_split_test.go:1142 had 1 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.581977 85363 storage/client_split_test.go:1142 had 1 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.582887 85363 storage/client_split_test.go:1142 had 1 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.583510 85457 storage/replica_command.go:300 [split,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
[08:42:54][Step 2/2] I181018 08:39:31.584198 85363 storage/client_split_test.go:1142 had 1 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.585577 85363 storage/client_split_test.go:1142 had 1 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.586943 85363 storage/client_split_test.go:1142 had 1 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.610402 85475 storage/replica_command.go:300 [split,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
[08:42:54][Step 2/2] I181018 08:39:31.611492 85363 storage/client_split_test.go:1142 had 2 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.613879 85363 storage/client_split_test.go:1142 had 2 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.615180 85363 storage/client_split_test.go:1142 had 2 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.616731 85363 storage/client_split_test.go:1142 had 2 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.617827 85363 storage/client_split_test.go:1142 had 2 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.619005 85363 storage/client_split_test.go:1142 had 2 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.620295 85363 storage/client_split_test.go:1142 had 2 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.621485 85363 storage/client_split_test.go:1142 had 2 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.668639 85334 storage/replica_command.go:300 [split,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
[08:42:54][Step 2/2] I181018 08:39:31.691013 85363 storage/client_split_test.go:1142 had 3 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.726722 85480 storage/replica_command.go:300 [split,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
[08:42:54][Step 2/2] I181018 08:39:31.734923 85363 storage/client_split_test.go:1142 had 4 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.745017 85363 storage/client_split_test.go:1142 had 4 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.746464 85363 storage/client_split_test.go:1142 had 4 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.748488 85363 storage/client_split_test.go:1142 had 4 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.751157 85363 storage/client_split_test.go:1142 had 4 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.783358 85463 storage/replica_command.go:300 [split,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
[08:42:54][Step 2/2] I181018 08:39:31.798184 85363 storage/client_split_test.go:1142 had 5 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.827761 85372 storage/replica_command.go:300 [split,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
[08:42:54][Step 2/2] I181018 08:39:31.845460 85363 storage/client_split_test.go:1142 had 6 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.884611 85309 storage/replica_command.go:300 [split,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
[08:42:54][Step 2/2] I181018 08:39:31.892386 85363 storage/client_split_test.go:1142 had 7 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.926022 85312 storage/replica_command.go:300 [split,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
[08:42:54][Step 2/2] I181018 08:39:31.935701 85363 storage/client_split_test.go:1142 had 8 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:31.970964 85489 storage/replica_command.go:300 [split,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
[08:42:54][Step 2/2] I181018 08:39:31.977686 85363 storage/client_split_test.go:1142 had 9 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:32.021833 85376 storage/replica_command.go:300 [split,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
[08:42:54][Step 2/2] I181018 08:39:32.067360 85425 storage/replica_command.go:300 [split,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
[08:42:54][Step 2/2] I181018 08:39:32.087468 85363 storage/client_split_test.go:1142 had 11 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:32.113677 85510 storage/replica_command.go:300 [split,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
[08:42:54][Step 2/2] I181018 08:39:32.171151 85314 storage/replica_command.go:300 [split,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
[08:42:54][Step 2/2] I181018 08:39:32.209671 85496 storage/replica_command.go:300 [split,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
[08:42:54][Step 2/2] I181018 08:39:32.223808 85363 storage/client_split_test.go:1142 had 14 ranges at startup, expected 20
[08:42:54][Step 2/2] I181018 08:39:32.244839 85497 storage/replica_command.go:300 [split,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
[08:42:54][Step 2/2] I181018 08:39:32.295551 85546 storage/replica_command.go:300 [split,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
[08:42:54][Step 2/2] I181018 08:39:32.343497 85566 storage/replica_command.go:300 [split,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
[08:42:54][Step 2/2] I181018 08:39:32.387733 85468 storage/replica_command.go:300 [split,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
[08:42:54][Step 2/2] I181018 08:39:32.444745 85573 storage/replica_command.go:300 [split,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
[08:42:54][Step 2/2] I181018 08:39:32.511262 85363 storage/replica_command.go:300 [s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
[08:42:54][Step 2/2] E181018 08:39:32.607558 85620 storage/consistency_queue.go:128 [consistencyChecker,s1,r1/1:/{Min-System/}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:32.608140 85620 storage/queue.go:791 [consistencyChecker,s1,r1/1:/{Min-System/}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:33.607428 85471 storage/consistency_queue.go:128 [consistencyChecker,s1,r14/1:/Table/1{7-8}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:33.608065 85471 storage/queue.go:791 [consistencyChecker,s1,r14/1:/Table/1{7-8}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:34.608943 85627 storage/consistency_queue.go:128 [consistencyChecker,s1,r11/1:/Table/1{4-5}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:34.609498 85627 storage/queue.go:791 [consistencyChecker,s1,r11/1:/Table/1{4-5}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] W181018 08:39:35.310010 85637 storage/replica_backpressure.go:135 [s1,r21/1:/{Table/50-Max}] applying backpressure to limit range growth on batch Put [/Table/50,/Min)
[08:42:54][Step 2/2] --- PASS: TestStoreRangeSplitBackpressureWrites (13.15s)
[08:42:54][Step 2/2] --- PASS: TestStoreRangeSplitBackpressureWrites/splitOngoing=true,splitErr=false (3.18s)
[08:42:54][Step 2/2] --- PASS: TestStoreRangeSplitBackpressureWrites/splitOngoing=true,splitErr=true (3.19s)
[08:42:54][Step 2/2] --- PASS: TestStoreRangeSplitBackpressureWrites/splitOngoing=false,splitErr=false (2.94s)
[08:42:54][Step 2/2] --- PASS: TestStoreRangeSplitBackpressureWrites/splitImpossible=true (3.83s)
[08:42:54][Step 2/2] === RUN TestStoreRangeSystemSplits
[08:42:54][Step 2/2] I181018 08:39:35.335979 85594 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"" address_field:"" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:35.395256 85610 storage/replica_command.go:300 [split,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
[08:42:54][Step 2/2] I181018 08:39:35.423112 85547 storage/replica_command.go:300 [split,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
[08:42:54][Step 2/2] W181018 08:39:35.447723 85598 storage/intent_resolver.go:675 [s1] failed to push during intent resolution: failed to push "unnamed" id=7bef22d5 key=/Table/SystemConfigSpan/Start rw=true pri=0.00919336 iso=SERIALIZABLE stat=PENDING epo=0 ts=0.000000123,19 orig=0.000000123,19 max=0.000000123,51 wto=false rop=false seq=1
[08:42:54][Step 2/2] I181018 08:39:35.474308 85615 storage/replica_command.go:300 [split,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
[08:42:54][Step 2/2] I181018 08:39:35.521224 85520 storage/replica_command.go:300 [split,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
[08:42:54][Step 2/2] I181018 08:39:35.562041 85618 storage/replica_command.go:300 [split,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
[08:42:54][Step 2/2] I181018 08:39:35.609046 85751 storage/replica_command.go:300 [split,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
[08:42:54][Step 2/2] I181018 08:39:35.640792 85553 storage/replica_command.go:300 [split,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
[08:42:54][Step 2/2] I181018 08:39:35.687221 85811 storage/replica_command.go:300 [split,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
[08:42:54][Step 2/2] I181018 08:39:35.723589 85754 storage/replica_command.go:300 [split,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
[08:42:54][Step 2/2] I181018 08:39:35.764442 85815 storage/replica_command.go:300 [split,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
[08:42:54][Step 2/2] I181018 08:39:35.801593 85758 storage/replica_command.go:300 [split,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
[08:42:54][Step 2/2] I181018 08:39:35.839936 85770 storage/replica_command.go:300 [split,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
[08:42:54][Step 2/2] I181018 08:39:35.873290 85830 storage/replica_command.go:300 [split,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
[08:42:54][Step 2/2] I181018 08:39:35.906800 85831 storage/replica_command.go:300 [split,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
[08:42:54][Step 2/2] I181018 08:39:35.946278 85876 storage/replica_command.go:300 [split,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
[08:42:54][Step 2/2] I181018 08:39:35.986369 85825 storage/replica_command.go:300 [split,s1,r16/1:/{Table/19-Max}] initiating a split of this range at key /Table/20 [r17]
[08:42:54][Step 2/2] I181018 08:39:36.026235 85801 storage/replica_command.go:300 [split,s1,r17/1:/{Table/20-Max}] initiating a split of this range at key /Table/21 [r18]
[08:42:54][Step 2/2] I181018 08:39:36.069213 85804 storage/replica_command.go:300 [split,s1,r18/1:/{Table/21-Max}] initiating a split of this range at key /Table/22 [r19]
[08:42:54][Step 2/2] I181018 08:39:36.116194 85865 storage/replica_command.go:300 [split,s1,r19/1:/{Table/22-Max}] initiating a split of this range at key /Table/23 [r20]
[08:42:54][Step 2/2] I181018 08:39:36.153233 85891 storage/replica_command.go:300 [split,s1,r20/1:/{Table/23-Max}] initiating a split of this range at key /Table/50 [r21]
[08:42:54][Step 2/2] I181018 08:39:36.206133 85867 storage/replica_command.go:300 [split,s1,r21/1:/{Table/50-Max}] initiating a split of this range at key /Table/51 [r22]
[08:42:54][Step 2/2] I181018 08:39:36.247144 85851 storage/replica_command.go:300 [split,s1,r22/1:/{Table/51-Max}] initiating a split of this range at key /Table/52 [r23]
[08:42:54][Step 2/2] I181018 08:39:36.284505 85897 storage/replica_command.go:300 [split,s1,r23/1:/{Table/52-Max}] initiating a split of this range at key /Table/53 [r24]
[08:42:54][Step 2/2] I181018 08:39:36.323795 85923 storage/replica_command.go:300 [split,s1,r24/1:/{Table/53-Max}] initiating a split of this range at key /Table/54 [r25]
[08:42:54][Step 2/2] E181018 08:39:36.393828 85854 storage/consistency_queue.go:128 [consistencyChecker,s1,r1/1:/{Min-System/}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] E181018 08:39:36.394404 85854 storage/queue.go:791 [consistencyChecker,s1,r1/1:/{Min-System/}] computing own checksum: could not dial node ID 1: no node dialer configured
[08:42:54][Step 2/2] I181018 08:39:36.446262 85929 storage/replica_command.go:300 [split,s1,r25/1:/{Table/54-Max}] initiating a split of this range at key /Table/57 [r26]
[08:42:54][Step 2/2] I181018 08:39:36.487531 85594 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 storage.replicate: purgatory processing replica
[08:42:54][Step 2/2] 1 storage.intentResolver: processing intents
[08:42:54][Step 2/2] 1 [async] storage.split: processing replica
[08:42:54][Step 2/2] I181018 08:39:36.489087 85594 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 storage.replicate: purgatory processing replica
[08:42:54][Step 2/2] 1 [async] storage.split: processing replica
[08:42:54][Step 2/2] I181018 08:39:36.490019 85594 util/stop/stopper.go:537 quiescing; tasks left:
[08:42:54][Step 2/2] 1 [async] storage.split: processing replica
[08:42:54][Step 2/2] --- PASS: TestStoreRangeSystemSplits (1.18s)
[08:42:54][Step 2/2] === RUN TestSplitSnapshotRace_SplitWins
[08:42:54][Step 2/2] I181018 08:39:36.574129 85787 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:37835" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:39:36.617869 85787 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:36.619048 85787 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45275" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:36.620687 86044 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:37835
[08:42:54][Step 2/2] W181018 08:39:36.674333 85787 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:36.675569 85787 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:42883" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:36.677621 86161 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:37835
[08:42:54][Step 2/2] W181018 08:39:36.733166 85787 gossip/gossip.go:1496 [n4] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:36.734161 85787 gossip/gossip.go:393 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:43321" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:36.737270 86297 gossip/client.go:129 [n4] started gossip client to 127.0.0.1:37835
[08:42:54][Step 2/2] W181018 08:39:36.784277 85787 gossip/gossip.go:1496 [n5] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:36.785334 85787 gossip/gossip.go:393 [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:39427" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:36.787291 86281 gossip/client.go:129 [n5] started gossip client to 127.0.0.1:37835
[08:42:54][Step 2/2] I181018 08:39:36.789140 86517 gossip/server.go:282 [n1] refusing gossip from n5 (max 3 conns); forwarding to n2 ({tcp 127.0.0.1:45275})
[08:42:54][Step 2/2] I181018 08:39:36.792129 86281 gossip/client.go:134 [n5] closing client to n1 (127.0.0.1:37835): received forward from n1 to 2 (127.0.0.1:45275)
[08:42:54][Step 2/2] I181018 08:39:36.794517 86502 gossip/gossip.go:1510 [n5] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:39:36.795956 86478 gossip/client.go:129 [n5] started gossip client to 127.0.0.1:45275
[08:42:54][Step 2/2] W181018 08:39:36.842185 85787 gossip/gossip.go:1496 [n6] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:36.843515 85787 gossip/gossip.go:393 [n6] NodeDescriptor set to node_id:6 address:<network_field:"tcp" address_field:"127.0.0.1:35441" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:36.845261 86613 gossip/client.go:129 [n6] started gossip client to 127.0.0.1:37835
[08:42:54][Step 2/2] I181018 08:39:36.847373 86530 gossip/server.go:282 [n1] refusing gossip from n6 (max 3 conns); forwarding to n2 ({tcp 127.0.0.1:45275})
[08:42:54][Step 2/2] I181018 08:39:36.850516 86613 gossip/client.go:134 [n6] closing client to n1 (127.0.0.1:37835): received forward from n1 to 2 (127.0.0.1:45275)
[08:42:54][Step 2/2] I181018 08:39:36.851746 86529 gossip/gossip.go:1510 [n6] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:39:36.854108 86290 gossip/client.go:129 [n6] started gossip client to 127.0.0.1:45275
[08:42:54][Step 2/2] I181018 08:39:36.878122 85787 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/Max [r2]
[08:42:54][Step 2/2] I181018 08:39:36.909002 85787 storage/store_snapshot.go:621 [s1,r2/1:/{System/Max-Max}] sending preemptive snapshot 3d5599bc at applied index 11
[08:42:54][Step 2/2] I181018 08:39:36.910427 85787 storage/store_snapshot.go:664 [s1,r2/1:/{System/Max-Max}] streamed snapshot to (n2,s2):?: kv pairs: 44, log entries: 1, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:39:36.912469 86504 storage/replica_raftstorage.go:804 [s2,r2/?:{-}] applying preemptive snapshot at index 11 (id=3d5599bc, encoded size=7511, 1 rocksdb batches, 1 log entries)
[08:42:54][Step 2/2] I181018 08:39:36.914547 86504 storage/replica_raftstorage.go:810 [s2,r2/?:/{System/Max-Max}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:36.917437 85787 storage/replica_command.go:816 [s1,r2/1:/{System/Max-Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/{System/Max-Max} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:39:36.928523 85787 storage/replica.go:3884 [s1,r2/1:/{System/Max-Max},txn=57d0fcc0] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] W181018 08:39:36.932449 85787 storage/replica.go:3339 [s1,r2/1:/{System/Max-Max},txn=57d0fcc0] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:36.939221 85787 storage/store_snapshot.go:621 [s1,r2/1:/{System/Max-Max}] sending preemptive snapshot c2b213c6 at applied index 14
[08:42:54][Step 2/2] I181018 08:39:36.943890 85787 storage/store_snapshot.go:664 [s1,r2/1:/{System/Max-Max}] streamed snapshot to (n3,s3):?: kv pairs: 46, log entries: 4, rate-limit: 2.0 MiB/sec, 7ms
[08:42:54][Step 2/2] I181018 08:39:36.946072 86647 storage/replica_raftstorage.go:804 [s3,r2/?:{-}] applying preemptive snapshot at index 14 (id=c2b213c6, encoded size=8404, 1 rocksdb batches, 4 log entries)
[08:42:54][Step 2/2] I181018 08:39:36.951683 86647 storage/replica_raftstorage.go:810 [s3,r2/?:/{System/Max-Max}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:36.955538 85787 storage/replica_command.go:816 [s1,r2/1:/{System/Max-Max}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r2:/{System/Max-Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] I181018 08:39:36.979022 85787 storage/replica.go:3884 [s1,r2/1:/{System/Max-Max},txn=1922c77e] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] W181018 08:39:36.986046 85787 storage/replica.go:3339 [s1,r2/1:/{System/Max-Max},txn=1922c77e] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:36.993443 85787 storage/store_snapshot.go:621 [s1,r2/1:/{System/Max-Max}] sending preemptive snapshot 4e71d8e3 at applied index 16
[08:42:54][Step 2/2] I181018 08:39:36.995460 85787 storage/store_snapshot.go:664 [s1,r2/1:/{System/Max-Max}] streamed snapshot to (n4,s4):?: kv pairs: 48, log entries: 6, rate-limit: 2.0 MiB/sec, 5ms
[08:42:54][Step 2/2] I181018 08:39:36.997249 86662 storage/replica_raftstorage.go:804 [s4,r2/?:{-}] applying preemptive snapshot at index 16 (id=4e71d8e3, encoded size=9310, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:39:37.000974 86662 storage/replica_raftstorage.go:810 [s4,r2/?:/{System/Max-Max}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:37.004491 85787 storage/replica_command.go:816 [s1,r2/1:/{System/Max-Max}] change replicas (ADD_REPLICA (n4,s4):4): read existing descriptor r2:/{System/Max-Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:39:37.038079 85787 storage/replica.go:3884 [s1,r2/1:/{System/Max-Max},txn=0f0a6292] proposing ADD_REPLICA((n4,s4):4): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3 (n4,s4):4] next=5
[08:42:54][Step 2/2] W181018 08:39:37.053365 85787 storage/replica.go:3339 [s1,r2/1:/{System/Max-Max},txn=0f0a6292] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] W181018 08:39:37.080683 86614 storage/replica.go:3112 [hb,txn=f5153aaa,range-lookup=/System/NodeLiveness/2,s1,r1/1:/{Min-System/Max}] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] W181018 08:39:37.138459 86174 storage/replica.go:3112 [hb,txn=a81f8eba,range-lookup=/System/NodeLiveness/3,s1,r1/1:/{Min-System/Max}] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] W181018 08:39:37.196171 86509 storage/replica.go:3112 [hb,txn=edcf803c,range-lookup=/System/NodeLiveness/4,s1,r1/1:/{Min-System/Max}] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:37.216224 86074 storage/replica_proposal.go:212 [s2,r2/2:/{System/Max-Max}] new range lease repl=(n2,s2):2 seq=2 start=0.000000123,639 epo=1 pro=0.000000123,640 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,6
[08:42:54][Step 2/2] W181018 08:39:37.248394 86694 storage/replica.go:3112 [hb,txn=8eb6e484,range-lookup=/System/NodeLiveness/5,s1,r1/1:/{Min-System/Max}] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] W181018 08:39:37.270938 86512 storage/replica.go:3112 [range-lookup=/System/Max,s1,r1/1:/{Min-System/Max}] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:37.274518 85787 storage/replica_command.go:816 [s2,r2/2:/{System/Max-Max}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r2:/{System/Max-Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, (n4,s4):4, next=5, gen=0]
[08:42:54][Step 2/2] I181018 08:39:37.296285 85787 storage/replica.go:3884 [s2,r2/2:/{System/Max-Max},txn=e9623c6f] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n4,s4):4 (n2,s2):2 (n3,s3):3] next=5
[08:42:54][Step 2/2] W181018 08:39:37.304846 86618 storage/replica.go:3112 [hb,txn=95b260f9,range-lookup=/System/NodeLiveness/6,s1,r1/1:/{Min-System/Max}] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] W181018 08:39:37.305725 85787 storage/replica.go:3339 [s2,r2/2:/{System/Max-Max},txn=e9623c6f] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:37.309788 86639 storage/store.go:3640 [s1,r2/1:/{System/Max-Max}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:39:37.320691 86649 storage/store.go:2580 [replicaGC,s1,r2/1:/{System/Max-Max}] removing replica r2/1
[08:42:54][Step 2/2] I181018 08:39:37.322644 86649 storage/replica.go:863 [replicaGC,s1,r2/1:/{System/Max-Max}] removed 47 (38+9) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:37.340280 85787 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:54][Step 2/2] I181018 08:39:37.375787 85787 storage/replica_command.go:300 [s2,r2/2:/{System/Max-Max}] initiating a split of this range at key "m" [r11]
[08:42:54][Step 2/2] W181018 08:39:37.422526 85787 storage/replica.go:3339 [s2,r2/2:{/System/Max-m},txn=df4a3965] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:37.431883 85787 storage/store_snapshot.go:621 [s2,r11/2:{m-/Max}] sending preemptive snapshot 159ae11c at applied index 10
[08:42:54][Step 2/2] I181018 08:39:37.433394 85787 storage/store_snapshot.go:664 [s2,r11/2:{m-/Max}] streamed snapshot to (n5,s5):?: kv pairs: 43, log entries: 0, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:39:37.436372 86708 storage/replica_raftstorage.go:804 [s5,r11/?:{-}] applying preemptive snapshot at index 10 (id=159ae11c, encoded size=7488, 1 rocksdb batches, 0 log entries)
[08:42:54][Step 2/2] I181018 08:39:37.438231 86708 storage/replica_raftstorage.go:810 [s5,r11/?:{m-/Max}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:37.442688 85787 storage/replica_command.go:816 [s2,r11/2:{m-/Max}] change replicas (ADD_REPLICA (n5,s5):4): read existing descriptor r11:{m-/Max} [(n4,s4):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:39:37.475293 85787 storage/replica.go:3884 [s2,r11/2:{m-/Max},txn=7d86b954] proposing ADD_REPLICA((n5,s5):4): updated=[(n4,s4):1 (n2,s2):2 (n3,s3):3 (n5,s5):4] next=5
[08:42:54][Step 2/2] W181018 08:39:37.483497 85787 storage/replica.go:3339 [s2,r11/2:{m-/Max},txn=7d86b954] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:37.491229 85787 storage/store_snapshot.go:621 [s2,r11/2:{m-/Max}] sending preemptive snapshot 1dc6524c at applied index 14
[08:42:54][Step 2/2] I181018 08:39:37.492512 85787 storage/store_snapshot.go:664 [s2,r11/2:{m-/Max}] streamed snapshot to (n6,s6):?: kv pairs: 45, log entries: 4, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:39:37.493997 86652 storage/replica_raftstorage.go:804 [s6,r11/?:{-}] applying preemptive snapshot at index 14 (id=1dc6524c, encoded size=8915, 1 rocksdb batches, 4 log entries)
[08:42:54][Step 2/2] I181018 08:39:37.496325 86652 storage/replica_raftstorage.go:810 [s6,r11/?:{m-/Max}] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:37.499902 85787 storage/replica_command.go:816 [s2,r11/2:{m-/Max}] change replicas (ADD_REPLICA (n6,s6):5): read existing descriptor r11:{m-/Max} [(n4,s4):1, (n2,s2):2, (n3,s3):3, (n5,s5):4, next=5, gen=0]
[08:42:54][Step 2/2] I181018 08:39:37.522115 85787 storage/replica.go:3884 [s2,r11/2:{m-/Max},txn=7a8fd921] proposing ADD_REPLICA((n6,s6):5): updated=[(n4,s4):1 (n2,s2):2 (n3,s3):3 (n5,s5):4 (n6,s6):5] next=6
[08:42:54][Step 2/2] W181018 08:39:37.528628 85787 storage/replica.go:3339 [s2,r11/2:{m-/Max},txn=7a8fd921] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:37.805421 85787 storage/replica_command.go:816 [s2,r11/2:{m-/Max}] change replicas (REMOVE_REPLICA (n3,s3):3): read existing descriptor r11:{m-/Max} [(n4,s4):1, (n2,s2):2, (n3,s3):3, (n5,s5):4, (n6,s6):5, next=6, gen=0]
[08:42:54][Step 2/2] I181018 08:39:37.827561 85787 storage/replica.go:3884 [s2,r11/2:{m-/Max},txn=b9d54f89] proposing REMOVE_REPLICA((n3,s3):3): updated=[(n4,s4):1 (n2,s2):2 (n6,s6):5 (n5,s5):4] next=6
[08:42:54][Step 2/2] W181018 08:39:37.836284 85787 storage/replica.go:3339 [s2,r11/2:{m-/Max},txn=b9d54f89] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:37.841202 86639 storage/store.go:3640 [s3,r11/3:{m-/Max}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:39:37.862800 86412 storage/replica_proposal.go:212 [s5,r11/4:{m-/Max}] new range lease repl=(n5,s5):4 seq=3 start=1.800000125,499 epo=1 pro=1.800000125,501 following repl=(n2,s2):2 seq=2 start=0.000000123,639 epo=1 pro=1.800000125,5
[08:42:54][Step 2/2] W181018 08:39:37.869606 86726 storage/replica.go:3112 [range-lookup="m",s1,r1/1:/{Min-System/Max}] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:37.878862 85787 storage/replica_command.go:816 [s5,r11/4:{m-/Max}] change replicas (REMOVE_REPLICA (n2,s2):2): read existing descriptor r11:{m-/Max} [(n4,s4):1, (n2,s2):2, (n6,s6):5, (n5,s5):4, next=6, gen=0]
[08:42:54][Step 2/2] I181018 08:39:37.909976 86639 storage/store.go:3640 [s3,r11/3:{m-/Max}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:39:37.910208 86400 storage/store.go:2580 [replicaGC,s3,r11/3:{m-/Max}] removing replica r11/3
[08:42:54][Step 2/2] I181018 08:39:37.911940 86400 storage/replica.go:863 [replicaGC,s3,r11/3:{m-/Max}] removed 45 (37+8) keys in 1ms [clear=0ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:37.919145 85787 storage/replica.go:3884 [s5,r11/4:{m-/Max},txn=3b0ef178] proposing REMOVE_REPLICA((n2,s2):2): updated=[(n4,s4):1 (n5,s5):4 (n6,s6):5] next=6
[08:42:54][Step 2/2] W181018 08:39:37.925367 85787 storage/replica.go:3339 [s5,r11/4:{m-/Max},txn=3b0ef178] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:37.926836 86178 storage/store.go:3640 [s2,r11/2:{m-/Max}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:39:37.938120 86626 storage/store.go:2580 [replicaGC,s2,r11/2:{m-/Max}] removing replica r11/2
[08:42:54][Step 2/2] I181018 08:39:37.939852 86626 storage/replica.go:863 [replicaGC,s2,r11/2:{m-/Max}] removed 46 (37+9) keys in 1ms [clear=0ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:38.106240 86717 storage/replica_proposal.go:212 [s2,r2/2:{/System/Max-m}] new range lease repl=(n2,s2):2 seq=3 start=1.800000125,746 epo=2 pro=1.800000125,922 following repl=(n2,s2):2 seq=2 start=0.000000123,639 epo=1 pro=1.800000125,5
[08:42:54][Step 2/2] I181018 08:39:38.182329 86812 storage/replica_proposal.go:212 [s5,r11/4:{m-/Max}] new range lease repl=(n5,s5):4 seq=4 start=1.800000125,1011 epo=1 pro=1.800000125,1020 following repl=(n5,s5):4 seq=3 start=1.800000125,499 epo=1 pro=1.800000125,501
[08:42:54][Step 2/2] W181018 08:39:38.403670 86617 storage/raft_transport.go:584 while processing outgoing Raft queue to node 4: EOF:
[08:42:54][Step 2/2] W181018 08:39:38.406131 86640 storage/raft_transport.go:584 while processing outgoing Raft queue to node 1: EOF:
[08:42:54][Step 2/2] --- PASS: TestSplitSnapshotRace_SplitWins (1.92s)
[08:42:54][Step 2/2] === RUN TestSplitSnapshotRace_SnapshotWins
[08:42:54][Step 2/2] I181018 08:39:38.490886 87219 gossip/gossip.go:393 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:43375" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] W181018 08:39:38.535191 87219 gossip/gossip.go:1496 [n2] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:38.536110 87219 gossip/gossip.go:393 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:33395" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:38.539561 87325 gossip/client.go:129 [n2] started gossip client to 127.0.0.1:43375
[08:42:54][Step 2/2] W181018 08:39:38.589759 87219 gossip/gossip.go:1496 [n3] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:38.591552 87219 gossip/gossip.go:393 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:35175" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:38.594131 87548 gossip/client.go:129 [n3] started gossip client to 127.0.0.1:43375
[08:42:54][Step 2/2] W181018 08:39:38.697333 87219 gossip/gossip.go:1496 [n4] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:38.698505 87219 gossip/gossip.go:393 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:42241" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:38.700285 87653 gossip/client.go:129 [n4] started gossip client to 127.0.0.1:43375
[08:42:54][Step 2/2] W181018 08:39:38.749663 87219 gossip/gossip.go:1496 [n5] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:38.751011 87219 gossip/gossip.go:393 [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:46763" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:38.754744 87696 gossip/client.go:129 [n5] started gossip client to 127.0.0.1:43375
[08:42:54][Step 2/2] I181018 08:39:38.758435 87697 gossip/server.go:282 [n1] refusing gossip from n5 (max 3 conns); forwarding to n4 ({tcp 127.0.0.1:42241})
[08:42:54][Step 2/2] E181018 08:39:38.761051 87629 gossip/gossip.go:1256 [n5] unable to get address for n4: unable to look up descriptor for n4
[08:42:54][Step 2/2] I181018 08:39:38.761577 87696 gossip/client.go:134 [n5] closing client to n1 (127.0.0.1:43375): received forward from n1 to 4 (127.0.0.1:42241)
[08:42:54][Step 2/2] I181018 08:39:38.764409 87658 gossip/client.go:129 [n5] started gossip client to 127.0.0.1:42241
[08:42:54][Step 2/2] W181018 08:39:38.862904 87219 gossip/gossip.go:1496 [n6] no incoming or outgoing connections
[08:42:54][Step 2/2] I181018 08:39:38.864154 87219 gossip/gossip.go:393 [n6] NodeDescriptor set to node_id:6 address:<network_field:"tcp" address_field:"127.0.0.1:33229" > attrs:<> locality:<> ServerVersion:<major_val:0 minor_val:0 patch:0 unstable:0 > build_tag:"" started_at:0
[08:42:54][Step 2/2] I181018 08:39:38.866039 87669 gossip/client.go:129 [n6] started gossip client to 127.0.0.1:43375
[08:42:54][Step 2/2] I181018 08:39:38.868303 87892 gossip/server.go:282 [n1] refusing gossip from n6 (max 3 conns); forwarding to n4 ({tcp 127.0.0.1:42241})
[08:42:54][Step 2/2] I181018 08:39:38.879371 87669 gossip/client.go:134 [n6] closing client to n1 (127.0.0.1:43375): received forward from n1 to 4 (127.0.0.1:42241)
[08:42:54][Step 2/2] I181018 08:39:38.883711 87775 gossip/gossip.go:1510 [n6] node has connected to cluster via gossip
[08:42:54][Step 2/2] I181018 08:39:38.889959 87671 gossip/client.go:129 [n6] started gossip client to 127.0.0.1:42241
[08:42:54][Step 2/2] I181018 08:39:38.896762 87219 storage/replica_command.go:300 [s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/Max [r2]
[08:42:54][Step 2/2] I181018 08:39:38.925466 87219 storage/store_snapshot.go:621 [s1,r2/1:/{System/Max-Max}] sending preemptive snapshot 2b81c6dd at applied index 11
[08:42:54][Step 2/2] I181018 08:39:38.927006 87219 storage/store_snapshot.go:664 [s1,r2/1:/{System/Max-Max}] streamed snapshot to (n2,s2):?: kv pairs: 44, log entries: 1, rate-limit: 2.0 MiB/sec, 4ms
[08:42:54][Step 2/2] I181018 08:39:38.928545 87432 storage/replica_raftstorage.go:804 [s2,r2/?:{-}] applying preemptive snapshot at index 11 (id=2b81c6dd, encoded size=7511, 1 rocksdb batches, 1 log entries)
[08:42:54][Step 2/2] I181018 08:39:38.930298 87432 storage/replica_raftstorage.go:810 [s2,r2/?:/{System/Max-Max}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:38.941189 87219 storage/replica_command.go:816 [s1,r2/1:/{System/Max-Max}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r2:/{System/Max-Max} [(n1,s1):1, next=2, gen=0]
[08:42:54][Step 2/2] I181018 08:39:38.964032 87219 storage/replica.go:3884 [s1,r2/1:/{System/Max-Max},txn=1dbeaf2c] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
[08:42:54][Step 2/2] W181018 08:39:38.966739 87219 storage/replica.go:3339 [s1,r2/1:/{System/Max-Max},txn=1dbeaf2c] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:38.972379 87219 storage/store_snapshot.go:621 [s1,r2/1:/{System/Max-Max}] sending preemptive snapshot 522d5a69 at applied index 14
[08:42:54][Step 2/2] I181018 08:39:38.973539 87219 storage/store_snapshot.go:664 [s1,r2/1:/{System/Max-Max}] streamed snapshot to (n3,s3):?: kv pairs: 46, log entries: 4, rate-limit: 2.0 MiB/sec, 3ms
[08:42:54][Step 2/2] I181018 08:39:38.975590 87674 storage/replica_raftstorage.go:804 [s3,r2/?:{-}] applying preemptive snapshot at index 14 (id=522d5a69, encoded size=8404, 1 rocksdb batches, 4 log entries)
[08:42:54][Step 2/2] I181018 08:39:38.978790 87674 storage/replica_raftstorage.go:810 [s3,r2/?:/{System/Max-Max}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:38.982385 87219 storage/replica_command.go:816 [s1,r2/1:/{System/Max-Max}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r2:/{System/Max-Max} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
[08:42:54][Step 2/2] W181018 08:39:38.999121 87802 storage/replica.go:3112 [hb,txn=9f9e1486,range-lookup=/System/NodeLiveness/2,s1,r1/1:/{Min-System/Max}] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:39.003475 87219 storage/replica.go:3884 [s1,r2/1:/{System/Max-Max},txn=9f85b5c0] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
[08:42:54][Step 2/2] W181018 08:39:39.010426 87219 storage/replica.go:3339 [s1,r2/1:/{System/Max-Max},txn=9f85b5c0] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:39.016606 87219 storage/store_snapshot.go:621 [s1,r2/1:/{System/Max-Max}] sending preemptive snapshot 2de3e253 at applied index 16
[08:42:54][Step 2/2] I181018 08:39:39.018047 87219 storage/store_snapshot.go:664 [s1,r2/1:/{System/Max-Max}] streamed snapshot to (n4,s4):?: kv pairs: 48, log entries: 6, rate-limit: 2.0 MiB/sec, 3ms
[08:42:54][Step 2/2] I181018 08:39:39.019528 87909 storage/replica_raftstorage.go:804 [s4,r2/?:{-}] applying preemptive snapshot at index 16 (id=2de3e253, encoded size=9310, 1 rocksdb batches, 6 log entries)
[08:42:54][Step 2/2] I181018 08:39:39.022545 87909 storage/replica_raftstorage.go:810 [s4,r2/?:/{System/Max-Max}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:39.025358 87219 storage/replica_command.go:816 [s1,r2/1:/{System/Max-Max}] change replicas (ADD_REPLICA (n4,s4):4): read existing descriptor r2:/{System/Max-Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]
[08:42:54][Step 2/2] I181018 08:39:39.051892 87219 storage/replica.go:3884 [s1,r2/1:/{System/Max-Max},txn=d3eaac38] proposing ADD_REPLICA((n4,s4):4): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3 (n4,s4):4] next=5
[08:42:54][Step 2/2] W181018 08:39:39.054339 87872 storage/replica.go:3112 [hb,txn=8ec5b07c,range-lookup=/System/NodeLiveness/3,s1,r1/1:/{Min-System/Max}] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] W181018 08:39:39.057976 87219 storage/replica.go:3339 [s1,r2/1:/{System/Max-Max},txn=d3eaac38] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] W181018 08:39:39.162381 87942 storage/replica.go:3112 [hb,txn=a6527527,range-lookup=/System/NodeLiveness/4,s1,r1/1:/{Min-System/Max}] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] W181018 08:39:39.216368 87943 storage/replica.go:3112 [hb,txn=925dcadd,range-lookup=/System/NodeLiveness/5,s1,r1/1:/{Min-System/Max}] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:39.220522 87358 storage/replica_proposal.go:212 [s2,r2/2:/{System/Max-Max}] new range lease repl=(n2,s2):2 seq=2 start=0.000000123,668 epo=1 pro=0.000000123,669 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,6 pro=0.000000123,8
[08:42:54][Step 2/2] W181018 08:39:39.274835 87946 storage/replica.go:3112 [range-lookup=/System/Max,s1,r1/1:/{Min-System/Max}] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:39.279593 87219 storage/replica_command.go:816 [s2,r2/2:/{System/Max-Max}] change replicas (REMOVE_REPLICA (n1,s1):1): read existing descriptor r2:/{System/Max-Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, (n4,s4):4, next=5, gen=0]
[08:42:54][Step 2/2] I181018 08:39:39.299836 87219 storage/replica.go:3884 [s2,r2/2:/{System/Max-Max},txn=23ea07e2] proposing REMOVE_REPLICA((n1,s1):1): updated=[(n4,s4):4 (n2,s2):2 (n3,s3):3] next=5
[08:42:54][Step 2/2] W181018 08:39:39.306882 87219 storage/replica.go:3339 [s2,r2/2:/{System/Max-Max},txn=23ea07e2] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:39.312671 87896 storage/store.go:3640 [s1,r2/1:/{System/Max-Max}] added to replica GC queue (peer suggestion)
[08:42:54][Step 2/2] I181018 08:39:39.322192 87903 storage/store.go:2580 [replicaGC,s1,r2/1:/{System/Max-Max}] removing replica r2/1
[08:42:54][Step 2/2] I181018 08:39:39.324004 87903 storage/replica.go:863 [replicaGC,s1,r2/1:/{System/Max-Max}] removed 47 (38+9) keys in 1ms [clear=1ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:39.342781 87219 storage/client_test.go:1252 test clock advanced to: 1.800000125,0
[08:42:54][Step 2/2] I181018 08:39:39.375913 87219 storage/replica_command.go:300 [s2,r2/2:/{System/Max-Max}] initiating a split of this range at key "m" [r11]
[08:42:54][Step 2/2] W181018 08:39:39.429768 87219 storage/replica.go:3339 [s2,r2/2:{/System/Max-m},txn=1baf714a] intents not processed as async resolution is disabled
[08:42:54][Step 2/2] I181018 08:39:39.442150 87219 storage/store_snapshot.go:621 [s2,r11/2:{m-/Max}] sending preemptive snapshot bc97b207 at applied index 10
[08:42:54][Step 2/2] I181018 08:39:39.443802 87219 storage/store_snapshot.go:664 [s2,r11/2:{m-/Max}] streamed snapshot to (n5,s5):?: kv pairs: 43, log entries: 0, rate-limit: 2.0 MiB/sec, 7ms
[08:42:54][Step 2/2] I181018 08:39:39.446294 87971 storage/replica_raftstorage.go:804 [s5,r11/?:{-}] applying preemptive snapshot at index 10 (id=bc97b207, encoded size=7488, 1 rocksdb batches, 0 log entries)
[08:42:54][Step 2/2] I181018 08:39:39.447899 87971 storage/replica_raftstorage.go:810 [s5,r11/?:{m-/Max}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
[08:42:54][Step 2/2] I181018 08:39:39.453445 87219 storage/replica_command.go:816 [s2,r11/2:{m-/Max}] change replicas (ADD_REPLICA (n5,s5):4): read existing descriptor r11:{m-/Max} [(n4,s4):1, (n2,s2):2, (n3,s3):3, next=4, gen=0]