
adding disturb command #121

Open
mbrenguier wants to merge 6 commits into beta from beta_starter

Conversation

@mbrenguier
Contributor

No description provided.

@mbrenguier mbrenguier self-assigned this Feb 14, 2018
@TravisBuddy

Travis tests have failed

Hey mbrenguier,
Please read the following log in order to understand the failure reason.
It'll be awesome if you fix what's wrong and commit the changes.

1st Build

vendor/bin/phpcs --standard=./phpcs.xml ./Library/
FILE: /home/travis/build/vpg/disturb/Library/Client/DisturbStarter.php
----------------------------------------------------------------------
FOUND 7 ERRORS AFFECTING 7 LINES
----------------------------------------------------------------------
  7 | ERROR | [ ] Missing doc comment for class DisturbStarter
 22 | ERROR | [x] Expected 2 spaces after parameter name; 3 found
 23 | ERROR | [x] Expected 1 spaces after parameter name; 2 found
 24 | ERROR | [x] Expected 5 spaces after parameter name; 6 found
 25 | ERROR | [x] Expected 3 spaces after parameter name; 4 found
 27 | ERROR | [ ] Missing @return tag in function comment
 41 | ERROR | [x] No space found after comma in function call
----------------------------------------------------------------------
PHPCBF CAN FIX THE 5 MARKED SNIFF VIOLATIONS AUTOMATICALLY
----------------------------------------------------------------------

Time: 1.59 secs; Memory: 10Mb
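Five of the seven violations are marked `[x]` as auto-fixable. They can usually be repaired with `phpcbf`, PHP_CodeSniffer's companion fixer — a sketch, assuming the same ruleset and target path used in the CI step above:

```shell
# Auto-fix the [x]-marked sniff violations (spacing after parameter
# names, missing space after comma), then re-run the checker.
# Assumes the project's phpcs.xml ruleset, as in the CI command above.
vendor/bin/phpcbf --standard=./phpcs.xml ./Library/
vendor/bin/phpcs --standard=./phpcs.xml ./Library/
```

The two remaining errors (missing class doc comment, missing `@return` tag) are not auto-fixable and need docblocks added by hand.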

vendor/phpunit/phpunit/phpunit -c Tests/phpunit.xml
PHPUnit 6.5.6 by Sebastian Bergmann and contributors.

Runtime:       PHP 7.1.11 with Xdebug 2.5.5
Configuration: /home/travis/build/vpg/disturb/Tests/phpunit.xml

2018-02-14 17:38:46 [INFO] Connecting to Elastic "http://127.0.0.1:9200" on disturb_context
2018-02-14 17:38:46 [INFO] Connecting to Elastic "http://127.0.0.1:9200" on disturb_context
2018-02-14 17:38:46 [INFO] Connecting to Elastic "http://127.0.0.1:9200" on disturb_context
.....2018-02-14 17:38:47 [INFO] Connecting to Elastic "http://127.0.0.1:9200" on disturb_context
2018-02-14 17:38:47 [INFO] Connecting to Elastic "http://127.0.0.1:9200" on disturb_context
......2018-02-14 17:38:47 [INFO] Connecting to Elastic "http://127.0.0.1:9200" on disturb_monitoring
...2018-02-14 17:38:47 [INFO] Connecting to Elastic "https://badhost" on disturb_monitoring
2018-02-14 17:38:47 [INFO] Connecting to Elastic "http://127.0.0.1:9200" on disturb_monitoring
......2018-02-14 17:38:47 [INFO] Connecting to Elastic "http://127.0.0.1:9200" on disturb_monitoring
...........2018-02-14 17:38:47 [INFO] Connecting to Elastic "http://127.0.0.1:9200" on disturb_monitoring
.2018-02-14 17:38:49 [INFO] Connecting to Elastic "http://127.0.0.1:9200" on disturb_monitoring
.2018-02-14 17:38:49 [INFO] Connecting to Elastic "http://127.0.0.1:9200" on disturb_monitoring
.2018-02-14 17:38:49 [INFO] Connecting to Elastic "http://127.0.0.1:9200" on disturb_context
.2018-02-14 17:38:49 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/serieWrongClientClass.json
2018-02-14 17:38:49 [INFO] Setting consumer group to badfoo
.2018-02-14 17:38:49 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/serie.json
2018-02-14 17:38:49 [INFO] Setting consumer group to manager
2018-02-14 17:38:49 [INFO] Connecting to Elastic "http://127.0.0.1:9200" on disturb_context
2018-02-14 17:38:49 [INFO] 🚀 Starting workflow test0.404739001518629929
[2018-02-14 17:38:49,470] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:setData cxid:0x61 zxid:0x29 txntype:-1 reqpath:n/a Error Path:/config/topics/__consumer_offsets Error:KeeperErrorCode = NoNode for /config/topics/__consumer_offsets (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:38:49,474] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:create cxid:0x62 zxid:0x2a txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:38:49,481] INFO Topic creation {"version":1,"partitions":{"45":[0],"34":[0],"12":[0],"8":[0],"19":[0],"23":[0],"4":[0],"40":[0],"15":[0],"11":[0],"9":[0],"44":[0],"33":[0],"22":[0],"26":[0],"37":[0],"13":[0],"46":[0],"24":[0],"35":[0],"16":[0],"5":[0],"10":[0],"48":[0],"21":[0],"43":[0],"32":[0],"49":[0],"6":[0],"36":[0],"1":[0],"39":[0],"17":[0],"25":[0],"14":[0],"47":[0],"31":[0],"42":[0],"0":[0],"20":[0],"27":[0],"2":[0],"38":[0],"18":[0],"30":[0],"7":[0],"29":[0],"41":[0],"3":[0],"28":[0]}} (kafka.admin.AdminUtils$)
[2018-02-14 17:38:49,489] INFO [KafkaApi-0] Auto creation of topic __consumer_offsets with 50 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
2018-02-14 17:38:49 [INFO] Nb job(s) to run for foo : 1
[2018-02-14 17:38:49,646] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:create cxid:0x9b zxid:0x2d txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/19 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/19 (org.apache.zookeeper.server.PrepRequestProcessor)
[… the same user-level NoNode KeeperException is logged once per remaining __consumer_offsets partition (partitions 0–49) by org.apache.zookeeper.server.PrepRequestProcessor; repeated lines elided …]
[2018-02-14 17:38:50,284] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-38,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-13,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.server.ReplicaFetcherManager)
[2018-02-14 17:38:50,301] INFO Loading producer state from offset 0 for partition __consumer_offsets-0 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:50,306] INFO Completed load of log __consumer_offsets-0 with 1 log segments, log start offset 0 and log end offset 0 in 12 ms (kafka.log.Log)
[2018-02-14 17:38:50,315] INFO Created log for partition [__consumer_offsets,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:50,316] INFO [Partition __consumer_offsets-0 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
[2018-02-14 17:38:50,319] INFO Replica loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:50,320] INFO [Partition __consumer_offsets-0 broker=0] __consumer_offsets-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:50,335] INFO Loading producer state from offset 0 for partition __consumer_offsets-29 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:50,347] INFO Completed load of log __consumer_offsets-29 with 1 log segments, log start offset 0 and log end offset 0 in 17 ms (kafka.log.Log)
[2018-02-14 17:38:50,348] INFO Created log for partition [__consumer_offsets,29] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:50,348] INFO [Partition __consumer_offsets-29 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
[2018-02-14 17:38:50,348] INFO Replica loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:50,348] INFO [Partition __consumer_offsets-29 broker=0] __consumer_offsets-29 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:50,361] INFO Loading producer state from offset 0 for partition __consumer_offsets-48 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:50,365] INFO Completed load of log __consumer_offsets-48 with 1 log segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
[2018-02-14 17:38:50,370] INFO Created log for partition [__consumer_offsets,48] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:50,371] INFO [Partition __consumer_offsets-48 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
[2018-02-14 17:38:50,374] INFO Replica loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:50,375] INFO [Partition __consumer_offsets-48 broker=0] __consumer_offsets-48 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:50,389] INFO Loading producer state from offset 0 for partition __consumer_offsets-10 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:50,391] INFO Completed load of log __consumer_offsets-10 with 1 log segments, log start offset 0 and log end offset 0 in 10 ms (kafka.log.Log)
[2018-02-14 17:38:50,397] INFO Created log for partition [__consumer_offsets,10] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:50,397] INFO [Partition __consumer_offsets-10 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
[2018-02-14 17:38:50,400] INFO Replica loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:50,400] INFO [Partition __consumer_offsets-10 broker=0] __consumer_offsets-10 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[… identical load/creation log blocks repeated for partitions __consumer_offsets-45, -26, -7, -42, -4, -23, -1 and -20 …]
[2018-02-14 17:38:50,562] INFO Loading producer state from offset 0 for partition __consumer_offsets-39 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:50,564] INFO Completed load of log __consumer_offsets-39 with 1 log segments, log start offset 0 and log end offset 0 in 9 ms (kafka.log.Log)
2018-02-14 17:38:50 [INFO] Ask job #0 for test0.404739001518629929 : foo
[2018-02-14 17:38:50,583] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:setData cxid:0x143 zxid:0xc5 txntype:-1 reqpath:n/a Error Path:/config/topics/disturb-test-foo-step Error:KeeperErrorCode = NoNode for /config/topics/disturb-test-foo-step (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:38:50,585] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:create cxid:0x144 zxid:0xc6 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:38:50,587] INFO Created log for partition [__consumer_offsets,39] in /tmp/kafka-logs with properties {… same as above …}. (kafka.log.LogManager)
[2018-02-14 17:38:50,588] INFO Topic creation {"version":1,"partitions":{"0":[0]}} (kafka.admin.AdminUtils$)
[2018-02-14 17:38:50,590] INFO [Partition __consumer_offsets-39 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
[2018-02-14 17:38:50,595] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:create cxid:0x14c zxid:0xc9 txntype:-1 reqpath:n/a Error Path:/brokers/topics/disturb-test-foo-step/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/disturb-test-foo-step/partitions/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:38:50,592] INFO [KafkaApi-0] Auto creation of topic disturb-test-foo-step with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
[2018-02-14 17:38:50,599] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:create cxid:0x14d zxid:0xca txntype:-1 reqpath:n/a Error Path:/brokers/topics/disturb-test-foo-step/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/disturb-test-foo-step/partitions (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:38:50,606] INFO Replica loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:50,606] INFO [Partition __consumer_offsets-39 broker=0] __consumer_offsets-39 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[… identical load/creation log blocks repeated for partitions __consumer_offsets-17 and -36 …]
2018-02-14 17:38:50 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/serie.json
2018-02-14 17:38:50 [INFO] Setting consumer group to foo
2018-02-14 17:38:50 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
[… identical load/creation log blocks repeated for partitions __consumer_offsets-14, -33 and -49 …]
2018-02-14 17:38:50 [INFO] messageDto : {"id":"test0.404739001518629929","type":"STEP-CTRL","stepCode":"foo","jobId":"1","action":"start","payload":{"foo":"bar0"}}
[… identical load/creation log blocks repeated for partitions __consumer_offsets-11, -30, -46, -27, -8, -24 and -43 …]
[2018-02-14 17:38:50,927] INFO Created log for partition [__consumer_offsets,43] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:50,928] INFO [Partition __consumer_offsets-43 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
[2018-02-14 17:38:50,928] INFO Replica loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:50,928] INFO [Partition __consumer_offsets-43 broker=0] __consumer_offsets-43 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:50,938] INFO Loading producer state from offset 0 for partition __consumer_offsets-5 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:50,938] INFO Completed load of log __consumer_offsets-5 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:50,939] INFO Created log for partition [__consumer_offsets,5] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:50,940] INFO [Partition __consumer_offsets-5 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
[2018-02-14 17:38:50,940] INFO Replica loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:50,940] INFO [Partition __consumer_offsets-5 broker=0] __consumer_offsets-5 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:50,951] INFO Loading producer state from offset 0 for partition __consumer_offsets-21 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:50,951] INFO Completed load of log __consumer_offsets-21 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:50,952] INFO Created log for partition [__consumer_offsets,21] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:50,952] INFO [Partition __consumer_offsets-21 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
[2018-02-14 17:38:50,953] INFO Replica loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:50,953] INFO [Partition __consumer_offsets-21 broker=0] __consumer_offsets-21 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:50,979] INFO Loading producer state from offset 0 for partition __consumer_offsets-2 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:50,979] INFO Completed load of log __consumer_offsets-2 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:50,980] INFO Created log for partition [__consumer_offsets,2] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:50,990] INFO [Partition __consumer_offsets-2 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
[2018-02-14 17:38:50,990] INFO Replica loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:50,990] INFO [Partition __consumer_offsets-2 broker=0] __consumer_offsets-2 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:50,994] INFO Loading producer state from offset 0 for partition __consumer_offsets-40 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:50,994] INFO Completed load of log __consumer_offsets-40 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:50,995] INFO Created log for partition [__consumer_offsets,40] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,005] INFO [Partition __consumer_offsets-40 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
[2018-02-14 17:38:51,005] INFO Replica loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,005] INFO [Partition __consumer_offsets-40 broker=0] __consumer_offsets-40 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,021] INFO Loading producer state from offset 0 for partition __consumer_offsets-37 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,027] INFO Completed load of log __consumer_offsets-37 with 1 log segments, log start offset 0 and log end offset 0 in 18 ms (kafka.log.Log)
[2018-02-14 17:38:51,028] INFO Created log for partition [__consumer_offsets,37] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,028] INFO [Partition __consumer_offsets-37 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
[2018-02-14 17:38:51,028] INFO Replica loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,028] INFO [Partition __consumer_offsets-37 broker=0] __consumer_offsets-37 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,037] INFO Loading producer state from offset 0 for partition __consumer_offsets-18 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,037] INFO Completed load of log __consumer_offsets-18 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:51,038] INFO Created log for partition [__consumer_offsets,18] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,043] INFO [Partition __consumer_offsets-18 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
[2018-02-14 17:38:51,043] INFO Replica loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,043] INFO [Partition __consumer_offsets-18 broker=0] __consumer_offsets-18 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,047] INFO Loading producer state from offset 0 for partition __consumer_offsets-34 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,047] INFO Completed load of log __consumer_offsets-34 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:51,048] INFO Created log for partition [__consumer_offsets,34] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,048] INFO [Partition __consumer_offsets-34 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
[2018-02-14 17:38:51,048] INFO Replica loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,049] INFO [Partition __consumer_offsets-34 broker=0] __consumer_offsets-34 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,061] INFO Loading producer state from offset 0 for partition __consumer_offsets-15 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,061] INFO Completed load of log __consumer_offsets-15 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:51,062] INFO Created log for partition [__consumer_offsets,15] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,062] INFO [Partition __consumer_offsets-15 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
[2018-02-14 17:38:51,062] INFO Replica loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,063] INFO [Partition __consumer_offsets-15 broker=0] __consumer_offsets-15 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,074] INFO Loading producer state from offset 0 for partition __consumer_offsets-12 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,075] INFO Completed load of log __consumer_offsets-12 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:51,076] INFO Created log for partition [__consumer_offsets,12] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,076] INFO [Partition __consumer_offsets-12 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
[2018-02-14 17:38:51,076] INFO Replica loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,076] INFO [Partition __consumer_offsets-12 broker=0] __consumer_offsets-12 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,088] INFO Loading producer state from offset 0 for partition __consumer_offsets-31 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,089] INFO Completed load of log __consumer_offsets-31 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:51,089] INFO Created log for partition [__consumer_offsets,31] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,090] INFO [Partition __consumer_offsets-31 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
[2018-02-14 17:38:51,090] INFO Replica loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,090] INFO [Partition __consumer_offsets-31 broker=0] __consumer_offsets-31 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,102] INFO Loading producer state from offset 0 for partition __consumer_offsets-9 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,102] INFO Completed load of log __consumer_offsets-9 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:51,103] INFO Created log for partition [__consumer_offsets,9] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,111] INFO [Partition __consumer_offsets-9 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
[2018-02-14 17:38:51,111] INFO Replica loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,111] INFO [Partition __consumer_offsets-9 broker=0] __consumer_offsets-9 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,115] INFO Loading producer state from offset 0 for partition __consumer_offsets-47 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,115] INFO Completed load of log __consumer_offsets-47 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:51,116] INFO Created log for partition [__consumer_offsets,47] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,116] INFO [Partition __consumer_offsets-47 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
[2018-02-14 17:38:51,116] INFO Replica loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,117] INFO [Partition __consumer_offsets-47 broker=0] __consumer_offsets-47 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,123] INFO Loading producer state from offset 0 for partition __consumer_offsets-19 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,123] INFO Completed load of log __consumer_offsets-19 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:51,124] INFO Created log for partition [__consumer_offsets,19] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,125] INFO [Partition __consumer_offsets-19 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
[2018-02-14 17:38:51,125] INFO Replica loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,125] INFO [Partition __consumer_offsets-19 broker=0] __consumer_offsets-19 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,142] INFO Loading producer state from offset 0 for partition __consumer_offsets-28 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,142] INFO Completed load of log __consumer_offsets-28 with 1 log segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2018-02-14 17:38:51,143] INFO Created log for partition [__consumer_offsets,28] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,144] INFO [Partition __consumer_offsets-28 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
[2018-02-14 17:38:51,144] INFO Replica loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,144] INFO [Partition __consumer_offsets-28 broker=0] __consumer_offsets-28 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,156] INFO Loading producer state from offset 0 for partition __consumer_offsets-38 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,156] INFO Completed load of log __consumer_offsets-38 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:51,157] INFO Created log for partition [__consumer_offsets,38] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,157] INFO [Partition __consumer_offsets-38 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
[2018-02-14 17:38:51,157] INFO Replica loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,158] INFO [Partition __consumer_offsets-38 broker=0] __consumer_offsets-38 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,164] INFO Loading producer state from offset 0 for partition __consumer_offsets-35 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,165] INFO Completed load of log __consumer_offsets-35 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:51,165] INFO Created log for partition [__consumer_offsets,35] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,166] INFO [Partition __consumer_offsets-35 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
[2018-02-14 17:38:51,166] INFO Replica loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,166] INFO [Partition __consumer_offsets-35 broker=0] __consumer_offsets-35 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,178] INFO Loading producer state from offset 0 for partition __consumer_offsets-44 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,178] INFO Completed load of log __consumer_offsets-44 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:51,179] INFO Created log for partition [__consumer_offsets,44] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,179] INFO [Partition __consumer_offsets-44 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
[2018-02-14 17:38:51,180] INFO Replica loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,180] INFO [Partition __consumer_offsets-44 broker=0] __consumer_offsets-44 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,192] INFO Loading producer state from offset 0 for partition __consumer_offsets-6 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,192] INFO Completed load of log __consumer_offsets-6 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:51,193] INFO Created log for partition [__consumer_offsets,6] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,204] INFO [Partition __consumer_offsets-6 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
[2018-02-14 17:38:51,204] INFO Replica loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,204] INFO [Partition __consumer_offsets-6 broker=0] __consumer_offsets-6 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,212] INFO Loading producer state from offset 0 for partition __consumer_offsets-25 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,213] INFO Completed load of log __consumer_offsets-25 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:51,213] INFO Created log for partition [__consumer_offsets,25] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,214] INFO [Partition __consumer_offsets-25 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
[2018-02-14 17:38:51,214] INFO Replica loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,214] INFO [Partition __consumer_offsets-25 broker=0] __consumer_offsets-25 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,216] INFO Loading producer state from offset 0 for partition __consumer_offsets-16 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,216] INFO Completed load of log __consumer_offsets-16 with 1 log segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log)
[2018-02-14 17:38:51,217] INFO Created log for partition [__consumer_offsets,16] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,218] INFO [Partition __consumer_offsets-16 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
[2018-02-14 17:38:51,218] INFO Replica loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,218] INFO [Partition __consumer_offsets-16 broker=0] __consumer_offsets-16 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,230] INFO Loading producer state from offset 0 for partition __consumer_offsets-22 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,230] INFO Completed load of log __consumer_offsets-22 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:51,231] INFO Created log for partition [__consumer_offsets,22] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,231] INFO [Partition __consumer_offsets-22 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
[2018-02-14 17:38:51,231] INFO Replica loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,231] INFO [Partition __consumer_offsets-22 broker=0] __consumer_offsets-22 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,262] INFO Loading producer state from offset 0 for partition __consumer_offsets-41 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,263] INFO Completed load of log __consumer_offsets-41 with 1 log segments, log start offset 0 and log end offset 0 in 16 ms (kafka.log.Log)
[2018-02-14 17:38:51,264] INFO Created log for partition [__consumer_offsets,41] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,532] INFO [Partition __consumer_offsets-41 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
[2018-02-14 17:38:51,532] INFO Replica loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,532] INFO [Partition __consumer_offsets-41 broker=0] __consumer_offsets-41 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,536] INFO Loading producer state from offset 0 for partition __consumer_offsets-32 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,536] INFO Completed load of log __consumer_offsets-32 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:51,537] INFO Created log for partition [__consumer_offsets,32] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,537] INFO [Partition __consumer_offsets-32 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
[2018-02-14 17:38:51,537] INFO Replica loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,538] INFO [Partition __consumer_offsets-32 broker=0] __consumer_offsets-32 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,543] INFO Loading producer state from offset 0 for partition __consumer_offsets-3 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,544] INFO Completed load of log __consumer_offsets-3 with 1 log segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2018-02-14 17:38:51,545] INFO Created log for partition [__consumer_offsets,3] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,545] INFO [Partition __consumer_offsets-3 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
[2018-02-14 17:38:51,547] INFO Replica loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,547] INFO [Partition __consumer_offsets-3 broker=0] __consumer_offsets-3 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,556] INFO Loading producer state from offset 0 for partition __consumer_offsets-13 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:51,556] INFO Completed load of log __consumer_offsets-13 with 1 log segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
[2018-02-14 17:38:51,560] INFO Created log for partition [__consumer_offsets,13] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:51,560] INFO [Partition __consumer_offsets-13 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
[2018-02-14 17:38:51,560] INFO Replica loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:51,560] INFO [Partition __consumer_offsets-13 broker=0] __consumer_offsets-13 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:51,567] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-22 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,567] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-25 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,568] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-28 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,568] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-31 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,568] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-34 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,568] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-37 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,568] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-40 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,568] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-43 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,568] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-46 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,568] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-49 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,568] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-41 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,568] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-44 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,572] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-47 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,572] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-1 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,572] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-4 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,572] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-7 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,572] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-10 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,572] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-13 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,572] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-16 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,572] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-19 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,572] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-2 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,572] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-5 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,572] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-8 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,572] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-11 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,573] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-14 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,573] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-17 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,573] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-20 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,573] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-22 in 5 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,822] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-25 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,822] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-28 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,824] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-31 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,824] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-34 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,824] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-37 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,824] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-40 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,824] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-43 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,824] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-46 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,825] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-49 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,825] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-41 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,825] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-44 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,825] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-47 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,825] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-1 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,825] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-4 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,825] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-7 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,826] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-10 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,826] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-13 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,826] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-16 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,826] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-19 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,826] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-2 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,826] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-5 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,826] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-8 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,826] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-11 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,821] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-23 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,827] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-26 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,827] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-29 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,827] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-32 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,827] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-35 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,827] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-38 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,827] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-0 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,827] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-3 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,827] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-6 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,827] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-9 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,827] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-12 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,827] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-15 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,827] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-18 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,827] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-21 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,827] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-24 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,827] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-27 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,827] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-30 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,139] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-33 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,139] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-36 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,139] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-39 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,139] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-42 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,139] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-45 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,139] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-48 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:51,829] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,140] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-17 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,140] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-20 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,141] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-23 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,141] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,141] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-29 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,141] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-32 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,141] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-35 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,141] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-38 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,142] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-0 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,142] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-3 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,142] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-6 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,142] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-9 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,142] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-12 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,142] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-15 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,142] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-18 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,142] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-21 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,157] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-24 in 15 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,157] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-27 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,158] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-30 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,158] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,158] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-36 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,158] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-39 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,160] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions disturb-test-foo-step-0 (kafka.server.ReplicaFetcherManager)
[2018-02-14 17:38:52,163] INFO Loading producer state from offset 0 for partition disturb-test-foo-step-0 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:52,163] INFO Completed load of log disturb-test-foo-step-0 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:52,164] INFO Created log for partition [disturb-test-foo-step,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:52,164] INFO [Partition disturb-test-foo-step-0 broker=0] No checkpointed highwatermark is found for partition disturb-test-foo-step-0 (kafka.cluster.Partition)
[2018-02-14 17:38:52,164] INFO Replica loaded for partition disturb-test-foo-step-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:52,164] INFO [Partition disturb-test-foo-step-0 broker=0] disturb-test-foo-step-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:38:52,165] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-42 in 7 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,172] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,172] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:38:52,318] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: disturb-test-manager-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
[2018-02-14 17:38:53,402] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: disturb-test-foo-step-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
...2018-02-14 17:38:53 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:53 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:53 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:53 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:38:53 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:53 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:53 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:53 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
[2018-02-14T17:38:54,017][WARN ][o.e.a.u.UpdateHelper     ] [V3kewjX] Used upsert operation [noop] for script [        def nbStep = ctx._source.steps.size();
        def jobHash = ['reservedBy':params.workerCode, 'executedOn':params.workerHostname];
        // loop over steps
        for (int stepIndex = 0; stepIndex < nbStep; stepIndex++) {
            def step = ctx._source.steps[stepIndex];
            // if its a parrallelized steps node, loop over each
            if (step instanceof List) {
                def nbParallelizedStep = step.size();
                for (int parallelizedStepIndex= 0; parallelizedStepIndex< nbParallelizedStep; parallelizedStepIndex++) {
                    // if the given step is found, look for the given job
                    if (step[parallelizedStepIndex].name == params.stepCode) {
                        def nbJob = step[parallelizedStepIndex]['jobList'].size();
                        for (int jobIndex = 0; jobIndex < nbJob; jobIndex++) {
                            def job = step[parallelizedStepIndex]['jobList'][jobIndex];
                            if (job.id == params.jobId) {
                                // if job's already reserved : noop
                                if (job.containsKey('reservedBy')) {
                                    ctx.op = 'noop';
                                    break;
                                }
                                ctx._source.steps[stepIndex][parallelizedStepIndex]['jobList'][jobIndex]
                                .putAll(jobHash);
                                break;
                            }
                         }
                        break;
                    }
                }
            } else if (step.name == params.stepCode) {
                def nbJob = step.jobList.size();
                for (int jobIndex = 0; jobIndex < nbJob; jobIndex++) {
                    def job = ctx._source.steps[stepIndex]['jobList'][jobIndex];
                    if (job.id == params.jobId) {
                        // if job's already reserved : noop
                        if (job.containsKey('reservedBy')) {
                            ctx.op = 'noop';
                            break;
                        }
                        ctx._source.steps[stepIndex]['jobList'][jobIndex].putAll(jobHash);
                        break;
                    }
                }
            }
        }], doing nothing...
.2018-02-14 17:38:54 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:54 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:54 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:54 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
[2018-02-14T17:38:54,274][WARN ][o.e.a.u.UpdateHelper     ] [V3kewjX] Used upsert operation [noop] for script [        int nbStep = ctx._source.steps.size();
        def jobHash = ['status':params.jobStatus, 'finishedAt':params.jobFinishedAt, 'result':params.jobResult];
        // loop over steps
        for (int stepIndex = 0; stepIndex < nbStep; stepIndex++) {
            def step = ctx._source.steps[stepIndex];
            // if its a parrallelized steps node, loop over each
            if (step instanceof List) {
                int nbParallelizedStep = step.size();
                for (int parallelizedStepIndex= 0; parallelizedStepIndex< nbParallelizedStep; parallelizedStepIndex++) {
                    // if the given step is found, look for the given job
                    if (step[parallelizedStepIndex].name == params.stepCode) {
                        def nbJob = step[parallelizedStepIndex]['jobList'].size();
                        for (int jobIndex = 0; jobIndex < nbJob; jobIndex++) {
                            def job = step[parallelizedStepIndex]['jobList'][jobIndex];
                            if (job.id == params.jobId) {
                                // if job's already finalized : noop
                                if (job.containsKey('finishedAt')) {
                                    ctx.op = 'noop';
                                    break;
                                }
                                ctx._source.steps[stepIndex][parallelizedStepIndex]['jobList'][jobIndex]
                                .putAll(jobHash);
                                break;
                            }
                         }
                        break;
                    }
                }
            } else if (step.name == params.stepCode) {
                int nbJob = step.jobList.size();
                for (int jobIndex = 0; jobIndex < nbJob; jobIndex++) {
                    def job = ctx._source.steps[stepIndex]['jobList'][jobIndex];
                    if (job.id == params.jobId) {
                        // if job's already finalized : noop
                        if (job.containsKey('finishedAt')) {
                            ctx.op = 'noop';
                            break;
                        }
                        ctx._source.steps[stepIndex]['jobList'][jobIndex].putAll(jobHash);
                        break;
                    }
                }
            }
        }], doing nothing...
.2018-02-14 17:38:54 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:54 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:54 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:54 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:38:54 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:54 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:54 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:54 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:55 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:38:56 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:56 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:56 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:56 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:56 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/serie.json
2018-02-14 17:38:56 [INFO] Setting consumer group to manager
2018-02-14 17:38:56 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:56 [INFO] 🚀 Starting workflow test0.160652001518629936
2018-02-14 17:38:56 [INFO] Nb job(s) to run for foo : 1
2018-02-14 17:38:56 [INFO] Ask job #0 for test0.160652001518629936 : foo
[2018-02-14 17:38:56,255] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:setData cxid:0x180 zxid:0xce txntype:-1 reqpath:n/a Error Path:/config/topics/test-disturb-test-foo-step Error:KeeperErrorCode = NoNode for /config/topics/test-disturb-test-foo-step (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:38:56,258] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:create cxid:0x181 zxid:0xcf txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:38:56,261] INFO Topic creation {"version":1,"partitions":{"0":[0]}} (kafka.admin.AdminUtils$)
[2018-02-14 17:38:56,263] INFO [KafkaApi-0] Auto creation of topic test-disturb-test-foo-step with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
[2018-02-14 17:38:56,272] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:create cxid:0x189 zxid:0xd2 txntype:-1 reqpath:n/a Error Path:/brokers/topics/test-disturb-test-foo-step/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/test-disturb-test-foo-step/partitions/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:38:56,275] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:create cxid:0x18a zxid:0xd3 txntype:-1 reqpath:n/a Error Path:/brokers/topics/test-disturb-test-foo-step/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/test-disturb-test-foo-step/partitions (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:38:56,286] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions test-disturb-test-foo-step-0 (kafka.server.ReplicaFetcherManager)
[2018-02-14 17:38:56,291] INFO Loading producer state from offset 0 for partition test-disturb-test-foo-step-0 with message format version 2 (kafka.log.Log)
[2018-02-14 17:38:56,291] INFO Completed load of log test-disturb-test-foo-step-0 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:38:56,293] INFO Created log for partition [test-disturb-test-foo-step,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:38:56,293] INFO [Partition test-disturb-test-foo-step-0 broker=0] No checkpointed highwatermark is found for partition test-disturb-test-foo-step-0 (kafka.cluster.Partition)
[2018-02-14 17:38:56,293] INFO Replica loaded for partition test-disturb-test-foo-step-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:38:56,294] INFO [Partition test-disturb-test-foo-step-0 broker=0] test-disturb-test-foo-step-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
.2018-02-14 17:38:56 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:56 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:56 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
[2018-02-14 17:38:58,169] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: test-disturb-test-foo-step-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
2018-02-14 17:38:58 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/serie.json
2018-02-14 17:38:58 [INFO] Setting consumer group to manager
2018-02-14 17:38:58 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:58 [INFO] 🚀 Starting workflow test0.303980001518629938
2018-02-14 17:38:58 [INFO] Nb job(s) to run for foo : 1
2018-02-14 17:38:58 [INFO] Ask job #0 for test0.303980001518629938 : foo
2018-02-14 17:38:58 [INFO] 🚀 Starting workflow test0.303980001518629938
2018-02-14 17:38:58 [ERROR] Failed to start workflow : Vpg\Disturb\Workflow\ManagerService::init : Failed to init workflow 'test0.303980001518629938' : existing context
.2018-02-14 17:38:58 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:58 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:58 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:38:58 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:58 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:58 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:59 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/serieWrongClientClass.json
2018-02-14 17:38:59 [INFO] Setting consumer group to manager
2018-02-14 17:38:59 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:38:59 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:59 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:59 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:59 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/serie.json
2018-02-14 17:38:59 [INFO] Setting consumer group to manager
2018-02-14 17:38:59 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:38:59 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:59 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:59 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:38:59 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/serie.json
2018-02-14 17:38:59 [INFO] Setting consumer group to manager
2018-02-14 17:38:59 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:39:02 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:39:03 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:39:03 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:39:03 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/withoutJob.json
2018-02-14 17:39:03 [INFO] Setting consumer group to manager
2018-02-14 17:39:03 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:39:03 [INFO] 🚀 Starting workflow test0.233618001518629943
2018-02-14 17:39:03 [INFO] Nb job(s) to run for foo : 1
2018-02-14 17:39:03 [INFO] Ask job #0 for test0.233618001518629943 : foo
2018-02-14 17:39:03 [INFO] Step foo ack SUCCESS
2018-02-14 17:39:03 [INFO] Id test0.233618001518629943 is 'STARTED'
2018-02-14 17:39:03 [INFO] Workflow test0.233618001518629943 - Current Step status : SUCCESS
2018-02-14 17:39:03 [INFO] Nb job(s) to run for noJob : 0
2018-02-14 17:39:03 [WARNING] No job to run for noJob
2018-02-14 17:39:03 [WARNING] Current step skipped
2018-02-14 17:39:03 [INFO] Nb job(s) to run for bar : 2
2018-02-14 17:39:03 [INFO] Ask job #0 for test0.233618001518629943 : bar
[2018-02-14 17:39:03,519] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:setData cxid:0x192 zxid:0xd7 txntype:-1 reqpath:n/a Error Path:/config/topics/test-disturb-test-bar-step Error:KeeperErrorCode = NoNode for /config/topics/test-disturb-test-bar-step (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:39:03,525] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:create cxid:0x193 zxid:0xd8 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:39:03,528] INFO Topic creation {"version":1,"partitions":{"0":[0]}} (kafka.admin.AdminUtils$)
[2018-02-14 17:39:03,529] INFO [KafkaApi-0] Auto creation of topic test-disturb-test-bar-step with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
[2018-02-14 17:39:03,535] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:create cxid:0x19b zxid:0xdb txntype:-1 reqpath:n/a Error Path:/brokers/topics/test-disturb-test-bar-step/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/test-disturb-test-bar-step/partitions/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:39:03,539] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:create cxid:0x19c zxid:0xdc txntype:-1 reqpath:n/a Error Path:/brokers/topics/test-disturb-test-bar-step/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/test-disturb-test-bar-step/partitions (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:39:03,552] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions test-disturb-test-bar-step-0 (kafka.server.ReplicaFetcherManager)
[2018-02-14 17:39:03,559] INFO Loading producer state from offset 0 for partition test-disturb-test-bar-step-0 with message format version 2 (kafka.log.Log)
[2018-02-14 17:39:03,561] INFO Completed load of log test-disturb-test-bar-step-0 with 1 log segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
[2018-02-14 17:39:03,561] INFO Created log for partition [test-disturb-test-bar-step,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:39:03,562] INFO [Partition test-disturb-test-bar-step-0 broker=0] No checkpointed highwatermark is found for partition test-disturb-test-bar-step-0 (kafka.cluster.Partition)
[2018-02-14 17:39:03,562] INFO Replica loaded for partition test-disturb-test-bar-step-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:39:03,562] INFO [Partition test-disturb-test-bar-step-0 broker=0] test-disturb-test-bar-step-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
2018-02-14 17:39:03 [INFO] Ask job #1 for test0.233618001518629943 : bar
2018-02-14 17:39:03 [INFO] 🎉 Workflow test0.233618001518629943 is now finished in success
2018-02-14 17:39:03 [INFO] Step bar ack SUCCESS
2018-02-14 17:39:03 [INFO] Id test0.233618001518629943 is 'SUCCESS'
2018-02-14 17:39:03 [INFO] Workflow test0.233618001518629943 - Current Step status : RUNNING
2018-02-14 17:39:03 [INFO] Step bar ack SUCCESS
2018-02-14 17:39:03 [INFO] Id test0.233618001518629943 is 'SUCCESS'
2018-02-14 17:39:03 [INFO] Workflow test0.233618001518629943 - Current Step status : SUCCESS
2018-02-14 17:39:03 [INFO] Nb job(s) to run for boo : 1
2018-02-14 17:39:03 [INFO] Ask job #0 for test0.233618001518629943 : boo
2018-02-14 17:39:03 [INFO] Nb job(s) to run for noJobParallelized : 0
[2018-02-14 17:39:03,768] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:setData cxid:0x1a4 zxid:0xe0 txntype:-1 reqpath:n/a Error Path:/config/topics/test-disturb-test-boo-step Error:KeeperErrorCode = NoNode for /config/topics/test-disturb-test-boo-step (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:39:03,771] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:create cxid:0x1a5 zxid:0xe1 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:39:03,774] INFO Topic creation {"version":1,"partitions":{"0":[0]}} (kafka.admin.AdminUtils$)
[2018-02-14 17:39:03,781] INFO [KafkaApi-0] Auto creation of topic test-disturb-test-boo-step with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
[2018-02-14 17:39:03,789] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:create cxid:0x1ad zxid:0xe4 txntype:-1 reqpath:n/a Error Path:/brokers/topics/test-disturb-test-boo-step/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/test-disturb-test-boo-step/partitions/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:39:03,791] INFO Got user-level KeeperException when processing sessionid:0x16195641bf40000 type:create cxid:0x1ae zxid:0xe5 txntype:-1 reqpath:n/a Error Path:/brokers/topics/test-disturb-test-boo-step/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/test-disturb-test-boo-step/partitions (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:39:03,799] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions test-disturb-test-boo-step-0 (kafka.server.ReplicaFetcherManager)
[2018-02-14 17:39:03,806] INFO Loading producer state from offset 0 for partition test-disturb-test-boo-step-0 with message format version 2 (kafka.log.Log)
[2018-02-14 17:39:03,807] INFO Completed load of log test-disturb-test-boo-step-0 with 1 log segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
[2018-02-14 17:39:03,808] INFO Created log for partition [test-disturb-test-boo-step,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:39:03,809] INFO [Partition test-disturb-test-boo-step-0 broker=0] No checkpointed highwatermark is found for partition test-disturb-test-boo-step-0 (kafka.cluster.Partition)
[2018-02-14 17:39:03,809] INFO Replica loaded for partition test-disturb-test-boo-step-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:39:03,809] INFO [Partition test-disturb-test-boo-step-0 broker=0] test-disturb-test-boo-step-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
2018-02-14 17:39:03 [WARNING] No job to run for noJobParallelized
2018-02-14 17:39:03 [INFO] 🎉 Workflow test0.233618001518629943 is now finished in success
2018-02-14 17:39:03 [INFO] Step boo ack SUCCESS
2018-02-14 17:39:03 [INFO] Id test0.233618001518629943 is 'SUCCESS'
2018-02-14 17:39:03 [INFO] Workflow test0.233618001518629943 - Current Step status : SUCCESS
2018-02-14 17:39:03 [INFO] Nb job(s) to run for noJobParallelizedBis : 0
2018-02-14 17:39:04 [WARNING] No job to run for noJobParallelizedBis
2018-02-14 17:39:04 [INFO] Nb job(s) to run for noJobParallelizedTris : 0
2018-02-14 17:39:04 [WARNING] No job to run for noJobParallelizedTris
2018-02-14 17:39:04 [WARNING] Current step skipped
2018-02-14 17:39:04 [INFO] 🎉 Workflow test0.233618001518629943 is now finished in success
.......    62 / 62 (100%)

Time: 17.67 seconds, Memory: 14.00MB

OK (62 tests, 125 assertions)

Generating code coverage report in Clover XML format ... done

Generating code coverage report in HTML format ... done
[2018-02-14 17:39:05,228] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: test-disturb-test-bar-step-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
[2018-02-14 17:39:05,240] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: test-disturb-test-boo-step-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)

@coveralls

coveralls commented Feb 14, 2018


Coverage decreased (-0.9%) to 85.317% when pulling 15c51e9 on beta_starter into 49a2b75 on beta.

@TravisBuddy

Travis tests have failed

Hey mbrenguier,
Please read the following log in order to understand the failure reason.
It'll be awesome if you fix what's wrong and commit the changes.

1st Build

vendor/bin/phpcs --standard=./phpcs.xml ./Library/
FILE: /home/travis/build/vpg/disturb/Library/Client/DisturbStarter.php
----------------------------------------------------------------------
FOUND 6 ERRORS AFFECTING 6 LINES
----------------------------------------------------------------------
  7 | ERROR | [ ] Missing doc comment for class DisturbStarter
 22 | ERROR | [x] Expected 2 spaces after parameter name; 3 found
 23 | ERROR | [x] Expected 1 spaces after parameter name; 2 found
 24 | ERROR | [x] Expected 5 spaces after parameter name; 6 found
 25 | ERROR | [x] Expected 3 spaces after parameter name; 4 found
 27 | ERROR | [ ] Missing @return tag in function comment
----------------------------------------------------------------------
PHPCBF CAN FIX THE 4 MARKED SNIFF VIOLATIONS AUTOMATICALLY
----------------------------------------------------------------------

Time: 1.38 secs; Memory: 10Mb
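The remaining sniff violations are the standard docblock rules: a class-level doc comment, aligned `@param` descriptions, and a `@return` tag. As a rough sketch only (the class name comes from the report, but the method and parameter names below are invented for illustration), a compliant block would look like:

```php
<?php

/**
 * Class docblock added to satisfy the "Missing doc comment" sniff.
 */
class DisturbStarter
{
    /**
     * Starts the given workflow.
     *
     * Parameter descriptions must be vertically aligned: the sniff counts
     * the spaces after each parameter name against the longest name.
     *
     * @param string $workflowName Name of the workflow to start
     * @param string $configPath   Path to the workflow JSON config
     * @param array  $payloadHash  Input payload
     *
     * @return void
     */
    public function start(string $workflowName, string $configPath, array $payloadHash)
    {
        // ...
    }
}
```

The `[x]`-marked violations (spacing, missing space after comma) can also be fixed automatically with `vendor/bin/phpcbf --standard=./phpcs.xml ./Library/`, leaving only the missing doc comment and `@return` tag to write by hand.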

vendor/phpunit/phpunit/phpunit -c Tests/phpunit.xml
PHPUnit 6.5.6 by Sebastian Bergmann and contributors.

Runtime:       PHP 7.1.11 with Xdebug 2.5.5
Configuration: /home/travis/build/vpg/disturb/Tests/phpunit.xml

2018-02-14 17:41:09 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:09 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:09 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.....2018-02-14 17:41:09 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:09 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
......2018-02-14 17:41:09 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_monitoring
...2018-02-14 17:41:09 [INFO] Connecting to Elastic "https:\/\/badhost" on disturb_monitoring
2018-02-14 17:41:09 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_monitoring
......2018-02-14 17:41:10 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_monitoring
...........2018-02-14 17:41:10 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_monitoring
.2018-02-14 17:41:11 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_monitoring
.2018-02-14 17:41:11 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_monitoring
.2018-02-14 17:41:11 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:41:11 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/serieWrongClientClass.json
2018-02-14 17:41:11 [INFO] Setting consumer group to badfoo
.2018-02-14 17:41:11 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/serie.json
2018-02-14 17:41:11 [INFO] Setting consumer group to manager
2018-02-14 17:41:11 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:11 [INFO] 🚀 Starting workflow test0.619054001518630071
[2018-02-14 17:41:11,694] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:setData cxid:0x61 zxid:0x29 txntype:-1 reqpath:n/a Error Path:/config/topics/__consumer_offsets Error:KeeperErrorCode = NoNode for /config/topics/__consumer_offsets (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:11,697] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:create cxid:0x62 zxid:0x2a txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:11,705] INFO Topic creation {"version":1,"partitions":{"45":[0],"34":[0],"12":[0],"8":[0],"19":[0],"23":[0],"4":[0],"40":[0],"15":[0],"11":[0],"9":[0],"44":[0],"33":[0],"22":[0],"26":[0],"37":[0],"13":[0],"46":[0],"24":[0],"35":[0],"16":[0],"5":[0],"10":[0],"48":[0],"21":[0],"43":[0],"32":[0],"49":[0],"6":[0],"36":[0],"1":[0],"39":[0],"17":[0],"25":[0],"14":[0],"47":[0],"31":[0],"42":[0],"0":[0],"20":[0],"27":[0],"2":[0],"38":[0],"18":[0],"30":[0],"7":[0],"29":[0],"41":[0],"3":[0],"28":[0]}} (kafka.admin.AdminUtils$)
[2018-02-14 17:41:11,726] INFO [KafkaApi-0] Auto creation of topic __consumer_offsets with 50 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
2018-02-14 17:41:11 [INFO] Nb job(s) to run for foo : 1
[2018-02-14 17:41:11,920] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:create cxid:0x9b zxid:0x2d txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions/19 Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions/19 (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:11,932] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:create cxid:0x9c zxid:0x2e txntype:-1 reqpath:n/a Error Path:/brokers/topics/__consumer_offsets/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/__consumer_offsets/partitions (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:12,543] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-38,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-13,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.server.ReplicaFetcherManager)
[2018-02-14 17:41:12,562] INFO Loading producer state from offset 0 for partition __consumer_offsets-0 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:12,564] INFO Completed load of log __consumer_offsets-0 with 1 log segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2018-02-14 17:41:12,565] INFO Created log for partition [__consumer_offsets,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:12,571] INFO [Partition __consumer_offsets-0 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
[2018-02-14 17:41:12,571] INFO Replica loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:12,572] INFO [Partition __consumer_offsets-0 broker=0] __consumer_offsets-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:12,807] INFO [Partition __consumer_offsets-4 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
[2018-02-14 17:41:12,807] INFO Replica loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:12,810] INFO [Partition __consumer_offsets-4 broker=0] __consumer_offsets-4 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:12,822] INFO Loading producer state from offset 0 for partition __consumer_offsets-23 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:12,825] INFO Completed load of log __consumer_offsets-23 with 1 log segments, log start offset 0 and log end offset 0 in 8 ms (kafka.log.Log)
[2018-02-14 17:41:12,828] INFO Created log for partition [__consumer_offsets,23] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:12,830] INFO [Partition __consumer_offsets-23 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
[2018-02-14 17:41:12,831] INFO Replica loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:12,831] INFO [Partition __consumer_offsets-23 broker=0] __consumer_offsets-23 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:12,841] INFO Loading producer state from offset 0 for partition __consumer_offsets-1 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:12,845] INFO Completed load of log __consumer_offsets-1 with 1 log segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
[2018-02-14 17:41:12,846] INFO Created log for partition [__consumer_offsets,1] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:12,846] INFO [Partition __consumer_offsets-1 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
[2018-02-14 17:41:12,846] INFO Replica loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:12,851] INFO [Partition __consumer_offsets-1 broker=0] __consumer_offsets-1 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:12,868] INFO Loading producer state from offset 0 for partition __consumer_offsets-20 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:12,870] INFO Completed load of log __consumer_offsets-20 with 1 log segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2018-02-14 17:41:12,872] INFO Created log for partition [__consumer_offsets,20] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:12,873] INFO [Partition __consumer_offsets-20 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
[2018-02-14 17:41:12,873] INFO Replica loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:12,873] INFO [Partition __consumer_offsets-20 broker=0] __consumer_offsets-20 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:12,881] INFO Loading producer state from offset 0 for partition __consumer_offsets-39 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:12,886] INFO Completed load of log __consumer_offsets-39 with 1 log segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
[2018-02-14 17:41:12,896] INFO Created log for partition [__consumer_offsets,39] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:12,898] INFO [Partition __consumer_offsets-39 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
[2018-02-14 17:41:12,898] INFO Replica loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:12,899] INFO [Partition __consumer_offsets-39 broker=0] __consumer_offsets-39 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:12,911] INFO Loading producer state from offset 0 for partition __consumer_offsets-17 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:12,914] INFO Completed load of log __consumer_offsets-17 with 1 log segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
[2018-02-14 17:41:12,919] INFO Created log for partition [__consumer_offsets,17] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:12,919] INFO [Partition __consumer_offsets-17 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
[2018-02-14 17:41:12,919] INFO Replica loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:12,921] INFO [Partition __consumer_offsets-17 broker=0] __consumer_offsets-17 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:12,933] INFO Loading producer state from offset 0 for partition __consumer_offsets-36 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:12,937] INFO Completed load of log __consumer_offsets-36 with 1 log segments, log start offset 0 and log end offset 0 in 8 ms (kafka.log.Log)
[2018-02-14 17:41:12,941] INFO Created log for partition [__consumer_offsets,36] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:12,942] INFO [Partition __consumer_offsets-36 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
[2018-02-14 17:41:12,942] INFO Replica loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:12,943] INFO [Partition __consumer_offsets-36 broker=0] __consumer_offsets-36 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:12,952] INFO Loading producer state from offset 0 for partition __consumer_offsets-14 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:12,953] INFO Completed load of log __consumer_offsets-14 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:41:12,954] INFO Created log for partition [__consumer_offsets,14] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:12,954] INFO [Partition __consumer_offsets-14 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
[2018-02-14 17:41:12,954] INFO Replica loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:12,960] INFO [Partition __consumer_offsets-14 broker=0] __consumer_offsets-14 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:12,969] INFO Loading producer state from offset 0 for partition __consumer_offsets-33 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:12,972] INFO Completed load of log __consumer_offsets-33 with 1 log segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
[2018-02-14 17:41:12,975] INFO Created log for partition [__consumer_offsets,33] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:12,977] INFO [Partition __consumer_offsets-33 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
[2018-02-14 17:41:12,977] INFO Replica loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:12,977] INFO [Partition __consumer_offsets-33 broker=0] __consumer_offsets-33 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:12,990] INFO Loading producer state from offset 0 for partition __consumer_offsets-49 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:12,990] INFO Completed load of log __consumer_offsets-49 with 1 log segments, log start offset 0 and log end offset 0 in 7 ms (kafka.log.Log)
[2018-02-14 17:41:12,994] INFO Created log for partition [__consumer_offsets,49] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:12,995] INFO [Partition __consumer_offsets-49 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
[2018-02-14 17:41:12,996] INFO Replica loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:12,996] INFO [Partition __consumer_offsets-49 broker=0] __consumer_offsets-49 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:13,001] INFO Loading producer state from offset 0 for partition __consumer_offsets-11 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:13,002] INFO Completed load of log __consumer_offsets-11 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:41:13,003] INFO Created log for partition [__consumer_offsets,11] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:13,003] INFO [Partition __consumer_offsets-11 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
[2018-02-14 17:41:13,003] INFO Replica loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:13,003] INFO [Partition __consumer_offsets-11 broker=0] __consumer_offsets-11 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:13,024] INFO Loading producer state from offset 0 for partition __consumer_offsets-30 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:13,026] INFO Completed load of log __consumer_offsets-30 with 1 log segments, log start offset 0 and log end offset 0 in 8 ms (kafka.log.Log)
[2018-02-14 17:41:13,030] INFO Created log for partition [__consumer_offsets,30] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:13,032] INFO [Partition __consumer_offsets-30 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
[2018-02-14 17:41:13,032] INFO Replica loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:13,032] INFO [Partition __consumer_offsets-30 broker=0] __consumer_offsets-30 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:13,042] INFO Loading producer state from offset 0 for partition __consumer_offsets-46 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:13,043] INFO Completed load of log __consumer_offsets-46 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:41:13,044] INFO Created log for partition [__consumer_offsets,46] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:13,054] INFO [Partition __consumer_offsets-46 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
[2018-02-14 17:41:13,054] INFO Replica loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:13,055] INFO [Partition __consumer_offsets-46 broker=0] __consumer_offsets-46 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:13,066] INFO Loading producer state from offset 0 for partition __consumer_offsets-27 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:13,077] INFO Completed load of log __consumer_offsets-27 with 1 log segments, log start offset 0 and log end offset 0 in 14 ms (kafka.log.Log)
[2018-02-14 17:41:13,079] INFO Created log for partition [__consumer_offsets,27] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:13,081] INFO [Partition __consumer_offsets-27 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
[2018-02-14 17:41:13,081] INFO Replica loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:13,082] INFO [Partition __consumer_offsets-27 broker=0] __consumer_offsets-27 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:13,147] INFO Loading producer state from offset 0 for partition __consumer_offsets-8 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:13,148] INFO Completed load of log __consumer_offsets-8 with 1 log segments, log start offset 0 and log end offset 0 in 44 ms (kafka.log.Log)
[2018-02-14 17:41:13,160] INFO Created log for partition [__consumer_offsets,8] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:13,176] INFO [Partition __consumer_offsets-8 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
[2018-02-14 17:41:13,177] INFO Replica loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:13,178] INFO [Partition __consumer_offsets-8 broker=0] __consumer_offsets-8 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:13,194] INFO Loading producer state from offset 0 for partition __consumer_offsets-24 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:13,197] INFO Completed load of log __consumer_offsets-24 with 1 log segments, log start offset 0 and log end offset 0 in 12 ms (kafka.log.Log)
[2018-02-14 17:41:13,198] INFO Created log for partition [__consumer_offsets,24] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:13,198] INFO [Partition __consumer_offsets-24 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
[2018-02-14 17:41:13,198] INFO Replica loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:13,198] INFO [Partition __consumer_offsets-24 broker=0] __consumer_offsets-24 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:13,228] INFO Loading producer state from offset 0 for partition __consumer_offsets-43 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:13,232] INFO Completed load of log __consumer_offsets-43 with 1 log segments, log start offset 0 and log end offset 0 in 11 ms (kafka.log.Log)
[2018-02-14 17:41:13,235] INFO Created log for partition [__consumer_offsets,43] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:13,235] INFO [Partition __consumer_offsets-43 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
[2018-02-14 17:41:13,235] INFO Replica loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:13,235] INFO [Partition __consumer_offsets-43 broker=0] __consumer_offsets-43 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:13,253] INFO Loading producer state from offset 0 for partition __consumer_offsets-5 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:13,253] INFO Completed load of log __consumer_offsets-5 with 1 log segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
[2018-02-14 17:41:13,254] INFO Created log for partition [__consumer_offsets,5] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:13,254] INFO [Partition __consumer_offsets-5 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
[2018-02-14 17:41:13,254] INFO Replica loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:13,259] INFO [Partition __consumer_offsets-5 broker=0] __consumer_offsets-5 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
2018-02-14 17:41:13 [INFO] Ask job #0 for test0.619054001518630071 : foo
[2018-02-14 17:41:13,543] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:setData cxid:0x15c zxid:0xc5 txntype:-1 reqpath:n/a Error Path:/config/topics/disturb-test-foo-step Error:KeeperErrorCode = NoNode for /config/topics/disturb-test-foo-step (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:13,548] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:create cxid:0x15e zxid:0xc6 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:13,551] INFO Topic creation {"version":1,"partitions":{"0":[0]}} (kafka.admin.AdminUtils$)
[2018-02-14 17:41:13,566] INFO [KafkaApi-0] Auto creation of topic disturb-test-foo-step with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
[2018-02-14 17:41:13,574] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:create cxid:0x166 zxid:0xc9 txntype:-1 reqpath:n/a Error Path:/brokers/topics/disturb-test-foo-step/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/disturb-test-foo-step/partitions/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:13,576] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:create cxid:0x167 zxid:0xca txntype:-1 reqpath:n/a Error Path:/brokers/topics/disturb-test-foo-step/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/disturb-test-foo-step/partitions (org.apache.zookeeper.server.PrepRequestProcessor)
2018-02-14 17:41:13 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/serie.json
2018-02-14 17:41:13 [INFO] Setting consumer group to foo
[2018-02-14 17:41:13,786] INFO [Partition __consumer_offsets-22 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
[2018-02-14 17:41:13,786] INFO Replica loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:13,786] INFO [Partition __consumer_offsets-22 broker=0] __consumer_offsets-22 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:13,792] INFO Loading producer state from offset 0 for partition __consumer_offsets-41 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:13,793] INFO Completed load of log __consumer_offsets-41 with 1 log segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2018-02-14 17:41:13,794] INFO Created log for partition [__consumer_offsets,41] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:13,794] INFO [Partition __consumer_offsets-41 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
[2018-02-14 17:41:13,794] INFO Replica loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:13,794] INFO [Partition __consumer_offsets-41 broker=0] __consumer_offsets-41 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:13,804] INFO Loading producer state from offset 0 for partition __consumer_offsets-32 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:13,804] INFO Completed load of log __consumer_offsets-32 with 1 log segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
2018-02-14 17:41:13 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
[2018-02-14 17:41:13,821] INFO Created log for partition [__consumer_offsets,32] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:13,824] INFO [Partition __consumer_offsets-32 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
[2018-02-14 17:41:13,824] INFO Replica loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:13,824] INFO [Partition __consumer_offsets-32 broker=0] __consumer_offsets-32 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:13,861] INFO Loading producer state from offset 0 for partition __consumer_offsets-3 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:13,862] INFO Completed load of log __consumer_offsets-3 with 1 log segments, log start offset 0 and log end offset 0 in 19 ms (kafka.log.Log)
[2018-02-14 17:41:13,862] INFO Created log for partition [__consumer_offsets,3] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:13,863] INFO [Partition __consumer_offsets-3 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
[2018-02-14 17:41:13,863] INFO Replica loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:13,863] INFO [Partition __consumer_offsets-3 broker=0] __consumer_offsets-3 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:13,876] INFO Loading producer state from offset 0 for partition __consumer_offsets-13 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:13,878] INFO Completed load of log __consumer_offsets-13 with 1 log segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
2018-02-14 17:41:13 [INFO] messageDto : {"id":"test0.619054001518630071","type":"STEP-CTRL","stepCode":"foo","jobId":"1","action":"start","payload":{"foo":"bar0"}}
[2018-02-14 17:41:13,885] INFO Created log for partition [__consumer_offsets,13] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:13,887] INFO [Partition __consumer_offsets-13 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
[2018-02-14 17:41:13,887] INFO Replica loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:13,888] INFO [Partition __consumer_offsets-13 broker=0] __consumer_offsets-13 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:13,891] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-22 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,896] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-25 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,896] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-28 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-31 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-34 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-37 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-40 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-43 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-46 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-49 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-41 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-44 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-47 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-1 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-4 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-7 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-10 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-13 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-16 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-19 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-2 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-5 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-8 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-11 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-14 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-17 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-20 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-23 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-26 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-29 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-32 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,897] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-35 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,916] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-38 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,916] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-0 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,916] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-3 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,917] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-6 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,917] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-9 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,917] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-12 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,917] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-15 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,917] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-18 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,917] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-21 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,917] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-24 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,917] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-27 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,917] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-30 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,918] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-33 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,918] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-36 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,918] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-39 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,918] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-42 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,918] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-45 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,919] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-48 (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,944] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions disturb-test-foo-step-0 (kafka.server.ReplicaFetcherManager)
[2018-02-14 17:41:13,946] INFO Loading producer state from offset 0 for partition disturb-test-foo-step-0 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:13,947] INFO Completed load of log disturb-test-foo-step-0 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:41:13,947] INFO Created log for partition [disturb-test-foo-step,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:13,948] INFO [Partition disturb-test-foo-step-0 broker=0] No checkpointed highwatermark is found for partition disturb-test-foo-step-0 (kafka.cluster.Partition)
[2018-02-14 17:41:13,948] INFO Replica loaded for partition disturb-test-foo-step-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:13,948] INFO [Partition disturb-test-foo-step-0 broker=0] disturb-test-foo-step-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-02-14 17:41:13,952] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-22 in 37 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,964] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-25 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,964] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-28 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,964] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-31 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,964] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-34 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,965] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-37 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,965] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-40 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,965] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-43 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,965] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-46 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,965] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-49 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,965] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-41 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,966] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-44 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,966] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-47 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,966] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-1 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,985] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-4 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,987] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-7 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,989] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-10 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,994] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-13 in 2 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,994] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-16 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,995] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-19 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,995] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-2 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,995] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-5 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,996] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-8 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,998] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-11 in 2 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,999] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,999] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-17 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,999] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-20 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,999] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-23 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:13,999] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,000] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-29 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,005] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-32 in 5 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,005] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-35 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,005] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-38 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,005] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-0 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,010] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-3 in 5 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,017] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-6 in 6 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,018] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-9 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,018] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-12 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,019] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-15 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,019] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-18 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,019] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-21 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,019] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-24 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,026] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-27 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,028] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-30 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,028] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,030] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-36 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,031] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-39 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,032] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-42 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,033] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:14,034] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-02-14 17:41:15,441] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: disturb-test-manager-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
[2018-02-14 17:41:15,621] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: disturb-test-foo-step-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
...2018-02-14 17:41:15 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:15 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:15 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:15 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:41:16 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:16 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:16 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:16 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
[2018-02-14T17:41:16,236][WARN ][o.e.a.u.UpdateHelper     ] [gcTJ8a5] Used upsert operation [noop] for script [        def nbStep = ctx._source.steps.size();
        def jobHash = ['reservedBy':params.workerCode, 'executedOn':params.workerHostname];
        // loop over steps
        for (int stepIndex = 0; stepIndex < nbStep; stepIndex++) {
            def step = ctx._source.steps[stepIndex];
            // if it's a parallelized steps node, loop over each
            if (step instanceof List) {
                def nbParallelizedStep = step.size();
                for (int parallelizedStepIndex = 0; parallelizedStepIndex < nbParallelizedStep; parallelizedStepIndex++) {
                    // if the given step is found, look for the given job
                    if (step[parallelizedStepIndex].name == params.stepCode) {
                        def nbJob = step[parallelizedStepIndex]['jobList'].size();
                        for (int jobIndex = 0; jobIndex < nbJob; jobIndex++) {
                            def job = step[parallelizedStepIndex]['jobList'][jobIndex];
                            if (job.id == params.jobId) {
                                // if job's already reserved : noop
                                if (job.containsKey('reservedBy')) {
                                    ctx.op = 'noop';
                                    break;
                                }
                                ctx._source.steps[stepIndex][parallelizedStepIndex]['jobList'][jobIndex]
                                .putAll(jobHash);
                                break;
                            }
                         }
                        break;
                    }
                }
            } else if (step.name == params.stepCode) {
                def nbJob = step.jobList.size();
                for (int jobIndex = 0; jobIndex < nbJob; jobIndex++) {
                    def job = ctx._source.steps[stepIndex]['jobList'][jobIndex];
                    if (job.id == params.jobId) {
                        // if job's already reserved : noop
                        if (job.containsKey('reservedBy')) {
                            ctx.op = 'noop';
                            break;
                        }
                        ctx._source.steps[stepIndex]['jobList'][jobIndex].putAll(jobHash);
                        break;
                    }
                }
            }
        }], doing nothing...
.2018-02-14 17:41:16 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:16 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:16 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:16 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
[2018-02-14T17:41:16,552][WARN ][o.e.a.u.UpdateHelper     ] [gcTJ8a5] Used upsert operation [noop] for script [        int nbStep = ctx._source.steps.size();
        def jobHash = ['status':params.jobStatus, 'finishedAt':params.jobFinishedAt, 'result':params.jobResult];
        // loop over steps
        for (int stepIndex = 0; stepIndex < nbStep; stepIndex++) {
            def step = ctx._source.steps[stepIndex];
            // if its a parrallelized steps node, loop over each
            if (step instanceof List) {
                int nbParallelizedStep = step.size();
                for (int parallelizedStepIndex= 0; parallelizedStepIndex< nbParallelizedStep; parallelizedStepIndex++) {
                    // if the given step is found, look for the given job
                    if (step[parallelizedStepIndex].name == params.stepCode) {
                        def nbJob = step[parallelizedStepIndex]['jobList'].size();
                        for (int jobIndex = 0; jobIndex < nbJob; jobIndex++) {
                            def job = step[parallelizedStepIndex]['jobList'][jobIndex];
                            if (job.id == params.jobId) {
                                // if job's already finalized : noop
                                if (job.containsKey('finishedAt')) {
                                    ctx.op = 'noop';
                                    break;
                                }
                                ctx._source.steps[stepIndex][parallelizedStepIndex]['jobList'][jobIndex]
                                .putAll(jobHash);
                                break;
                            }
                         }
                        break;
                    }
                }
            } else if (step.name == params.stepCode) {
                int nbJob = step.jobList.size();
                for (int jobIndex = 0; jobIndex < nbJob; jobIndex++) {
                    def job = ctx._source.steps[stepIndex]['jobList'][jobIndex];
                    if (job.id == params.jobId) {
                        // if job's already finalized : noop
                        if (job.containsKey('finishedAt')) {
                            ctx.op = 'noop';
                            break;
                        }
                        ctx._source.steps[stepIndex]['jobList'][jobIndex].putAll(jobHash);
                        break;
                    }
                }
            }
        }], doing nothing...
.2018-02-14 17:41:16 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:16 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:16 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:16 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:41:17 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:17 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:17 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:17 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:41:17 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:17 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:17 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:17 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:41:17 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:17 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:17 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:17 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:41:17 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:17 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:17 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:17 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:41:18 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:18 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:18 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:18 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:41:18 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:18 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:18 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:18 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:41:18 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:18 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:18 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:18 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:18 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/serie.json
2018-02-14 17:41:18 [INFO] Setting consumer group to manager
2018-02-14 17:41:18 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:18 [INFO] 🚀 Starting workflow test0.737219001518630078
2018-02-14 17:41:18 [INFO] Nb job(s) to run for foo : 1
2018-02-14 17:41:18 [INFO] Ask job #0 for test0.737219001518630078 : foo
[2018-02-14 17:41:18,823] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:setData cxid:0x183 zxid:0xce txntype:-1 reqpath:n/a Error Path:/config/topics/test-disturb-test-foo-step Error:KeeperErrorCode = NoNode for /config/topics/test-disturb-test-foo-step (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:18,825] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:create cxid:0x184 zxid:0xcf txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:18,828] INFO Topic creation {"version":1,"partitions":{"0":[0]}} (kafka.admin.AdminUtils$)
[2018-02-14 17:41:18,831] INFO [KafkaApi-0] Auto creation of topic test-disturb-test-foo-step with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
[2018-02-14 17:41:18,836] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:create cxid:0x18c zxid:0xd2 txntype:-1 reqpath:n/a Error Path:/brokers/topics/test-disturb-test-foo-step/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/test-disturb-test-foo-step/partitions/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:18,838] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:create cxid:0x18d zxid:0xd3 txntype:-1 reqpath:n/a Error Path:/brokers/topics/test-disturb-test-foo-step/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/test-disturb-test-foo-step/partitions (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:18,845] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions test-disturb-test-foo-step-0 (kafka.server.ReplicaFetcherManager)
[2018-02-14 17:41:18,848] INFO Loading producer state from offset 0 for partition test-disturb-test-foo-step-0 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:18,848] INFO Completed load of log test-disturb-test-foo-step-0 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:41:18,850] INFO Created log for partition [test-disturb-test-foo-step,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:18,856] INFO [Partition test-disturb-test-foo-step-0 broker=0] No checkpointed highwatermark is found for partition test-disturb-test-foo-step-0 (kafka.cluster.Partition)
[2018-02-14 17:41:18,856] INFO Replica loaded for partition test-disturb-test-foo-step-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:18,856] INFO [Partition test-disturb-test-foo-step-0 broker=0] test-disturb-test-foo-step-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
.2018-02-14 17:41:18 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:18 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:18 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
[2018-02-14 17:41:20,724] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: test-disturb-test-foo-step-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
2018-02-14 17:41:20 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/serie.json
2018-02-14 17:41:20 [INFO] Setting consumer group to manager
2018-02-14 17:41:20 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:20 [INFO] 🚀 Starting workflow test0.860833001518630080
2018-02-14 17:41:20 [INFO] Nb job(s) to run for foo : 1
2018-02-14 17:41:20 [INFO] Ask job #0 for test0.860833001518630080 : foo
2018-02-14 17:41:20 [INFO] 🚀 Starting workflow test0.860833001518630080
2018-02-14 17:41:20 [ERROR] Failed to start workflow : Vpg\Disturb\Workflow\ManagerService::init : Failed to init workflow 'test0.860833001518630080' : existing context
.2018-02-14 17:41:21 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:21 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:21 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:41:21 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:21 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:21 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:22 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/serieWrongClientClass.json
2018-02-14 17:41:22 [INFO] Setting consumer group to manager
2018-02-14 17:41:22 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:41:22 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:22 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:22 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:22 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/serie.json
2018-02-14 17:41:22 [INFO] Setting consumer group to manager
2018-02-14 17:41:22 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:41:22 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:22 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:22 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:22 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/serie.json
2018-02-14 17:41:22 [INFO] Setting consumer group to manager
2018-02-14 17:41:22 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
.2018-02-14 17:41:25 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:25 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:25 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:25 [INFO] Loading workflow config from /home/travis/build/vpg/disturb/Tests/Config/withoutJob.json
2018-02-14 17:41:25 [INFO] Setting consumer group to manager
2018-02-14 17:41:25 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-14 17:41:25 [INFO] 🚀 Starting workflow test0.835564001518630085
2018-02-14 17:41:25 [INFO] Nb job(s) to run for foo : 1
2018-02-14 17:41:25 [INFO] Ask job #0 for test0.835564001518630085 : foo
2018-02-14 17:41:25 [INFO] Step foo ack SUCCESS
2018-02-14 17:41:25 [INFO] Id test0.835564001518630085 is 'STARTED'
2018-02-14 17:41:25 [INFO] Workflow test0.835564001518630085 - Current Step status : SUCCESS
2018-02-14 17:41:26 [INFO] Nb job(s) to run for noJob : 0
2018-02-14 17:41:26 [WARNING] No job to run for noJob
2018-02-14 17:41:26 [WARNING] Current step skipped
2018-02-14 17:41:26 [INFO] Nb job(s) to run for bar : 2
2018-02-14 17:41:26 [INFO] Ask job #0 for test0.835564001518630085 : bar
[2018-02-14 17:41:26,130] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:setData cxid:0x195 zxid:0xd7 txntype:-1 reqpath:n/a Error Path:/config/topics/test-disturb-test-bar-step Error:KeeperErrorCode = NoNode for /config/topics/test-disturb-test-bar-step (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:26,134] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:create cxid:0x196 zxid:0xd8 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:26,136] INFO Topic creation {"version":1,"partitions":{"0":[0]}} (kafka.admin.AdminUtils$)
[2018-02-14 17:41:26,140] INFO [KafkaApi-0] Auto creation of topic test-disturb-test-bar-step with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
[2018-02-14 17:41:26,147] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:create cxid:0x19e zxid:0xdb txntype:-1 reqpath:n/a Error Path:/brokers/topics/test-disturb-test-bar-step/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/test-disturb-test-bar-step/partitions/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:26,156] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:create cxid:0x19f zxid:0xdc txntype:-1 reqpath:n/a Error Path:/brokers/topics/test-disturb-test-bar-step/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/test-disturb-test-bar-step/partitions (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:26,169] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions test-disturb-test-bar-step-0 (kafka.server.ReplicaFetcherManager)
[2018-02-14 17:41:26,171] INFO Loading producer state from offset 0 for partition test-disturb-test-bar-step-0 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:26,171] INFO Completed load of log test-disturb-test-bar-step-0 with 1 log segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2018-02-14 17:41:26,172] INFO Created log for partition [test-disturb-test-bar-step,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:26,173] INFO [Partition test-disturb-test-bar-step-0 broker=0] No checkpointed highwatermark is found for partition test-disturb-test-bar-step-0 (kafka.cluster.Partition)
[2018-02-14 17:41:26,173] INFO Replica loaded for partition test-disturb-test-bar-step-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:26,173] INFO [Partition test-disturb-test-bar-step-0 broker=0] test-disturb-test-bar-step-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
2018-02-14 17:41:26 [INFO] Ask job #1 for test0.835564001518630085 : bar
2018-02-14 17:41:26 [INFO] 🎉 Workflow test0.835564001518630085 is now finished in success
2018-02-14 17:41:26 [INFO] Step bar ack SUCCESS
2018-02-14 17:41:26 [INFO] Id test0.835564001518630085 is 'SUCCESS'
2018-02-14 17:41:26 [INFO] Workflow test0.835564001518630085 - Current Step status : RUNNING
2018-02-14 17:41:26 [INFO] Step bar ack SUCCESS
2018-02-14 17:41:26 [INFO] Id test0.835564001518630085 is 'SUCCESS'
2018-02-14 17:41:26 [INFO] Workflow test0.835564001518630085 - Current Step status : SUCCESS
2018-02-14 17:41:26 [INFO] Nb job(s) to run for boo : 1
2018-02-14 17:41:26 [INFO] Ask job #0 for test0.835564001518630085 : boo
2018-02-14 17:41:26 [INFO] Nb job(s) to run for noJobParallelized : 0
[2018-02-14 17:41:26,407] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:setData cxid:0x1a7 zxid:0xe0 txntype:-1 reqpath:n/a Error Path:/config/topics/test-disturb-test-boo-step Error:KeeperErrorCode = NoNode for /config/topics/test-disturb-test-boo-step (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:26,410] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:create cxid:0x1a8 zxid:0xe1 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:26,413] INFO Topic creation {"version":1,"partitions":{"0":[0]}} (kafka.admin.AdminUtils$)
[2018-02-14 17:41:26,416] INFO [KafkaApi-0] Auto creation of topic test-disturb-test-boo-step with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
[2018-02-14 17:41:26,425] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:create cxid:0x1b0 zxid:0xe4 txntype:-1 reqpath:n/a Error Path:/brokers/topics/test-disturb-test-boo-step/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/test-disturb-test-boo-step/partitions/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:26,430] INFO Got user-level KeeperException when processing sessionid:0x161956683200000 type:create cxid:0x1b1 zxid:0xe5 txntype:-1 reqpath:n/a Error Path:/brokers/topics/test-disturb-test-boo-step/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/test-disturb-test-boo-step/partitions (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-02-14 17:41:26,438] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions test-disturb-test-boo-step-0 (kafka.server.ReplicaFetcherManager)
[2018-02-14 17:41:26,440] INFO Loading producer state from offset 0 for partition test-disturb-test-boo-step-0 with message format version 2 (kafka.log.Log)
[2018-02-14 17:41:26,441] INFO Completed load of log test-disturb-test-boo-step-0 with 1 log segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2018-02-14 17:41:26,442] INFO Created log for partition [test-disturb-test-boo-step,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2018-02-14 17:41:26,443] INFO [Partition test-disturb-test-boo-step-0 broker=0] No checkpointed highwatermark is found for partition test-disturb-test-boo-step-0 (kafka.cluster.Partition)
[2018-02-14 17:41:26,443] INFO Replica loaded for partition test-disturb-test-boo-step-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-02-14 17:41:26,443] INFO [Partition test-disturb-test-boo-step-0 broker=0] test-disturb-test-boo-step-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
2018-02-14 17:41:26 [WARNING] No job to run for noJobParallelized
2018-02-14 17:41:26 [INFO] 🎉 Workflow test0.835564001518630085 is now finished in success
2018-02-14 17:41:26 [INFO] Step boo ack SUCCESS
2018-02-14 17:41:26 [INFO] Id test0.835564001518630085 is 'SUCCESS'
2018-02-14 17:41:26 [INFO] Workflow test0.835564001518630085 - Current Step status : SUCCESS
2018-02-14 17:41:26 [INFO] Nb job(s) to run for noJobParallelizedBis : 0
2018-02-14 17:41:26 [WARNING] No job to run for noJobParallelizedBis
2018-02-14 17:41:26 [INFO] Nb job(s) to run for noJobParallelizedTris : 0
2018-02-14 17:41:26 [WARNING] No job to run for noJobParallelizedTris
2018-02-14 17:41:26 [WARNING] Current step skipped
2018-02-14 17:41:26 [INFO] 🎉 Workflow test0.835564001518630085 is now finished in success
.......    62 / 62 (100%)

Time: 17.9 seconds, Memory: 14.00MB

OK (62 tests, 125 assertions)

Generating code coverage report in Clover XML format ... done

Generating code coverage report in HTML format ... done
[2018-02-14 17:41:27,849] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: test-disturb-test-bar-step-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)
[2018-02-14 17:41:27,858] INFO Updated PartitionLeaderEpoch. New: {epoch:0, offset:0}, Current: {epoch:-1, offset-1} for Partition: test-disturb-test-boo-step-0. Cache now contains 0 entries. (kafka.server.epoch.LeaderEpochFileCache)

* @param String $topicName topic name
*
*/
public static function start(string $workflowId, array $payloadHash, string $brokers, string $topicName)
The client (turbo) should use https://github.com/vpg/disturb/blob/beta/Library/Topic/TopicService.php to define the topic name, according to the client workflow name and the Disturb topic naming logic.
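The suggestion can be sketched as follows. The method name `getWorkflowManagerTopicName()` is an assumption for illustration only (the real API lives in Library/Topic/TopicService.php), and the naming pattern is inferred from the topics visible in the build log (disturb-test-manager, disturb-test-foo-step):

```php
<?php
// Sketch only: the TopicService helper name below is hypothetical;
// check Library/Topic/TopicService.php for the actual API.
use Vpg\Disturb\Topic\TopicService;

$workflowName = 'test';

// Instead of the client hard-coding e.g. "disturb-test-manager", it
// would derive the topic from the workflow name via the shared logic:
$topicName = TopicService::getWorkflowManagerTopicName($workflowName);
```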

use \Phalcon\Mvc\User\Component;
use \Vpg\Disturb\Message;

class DisturbStarter extends Component
Maybe rename DisturbStarter to Command to make the call site more readable:
\Disturb\Client\Command::start(...)
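With that rename, a client call would read roughly like the sketch below. All argument values are placeholders, matching the start() signature quoted earlier in the review:

```php
<?php
// Sketch of the suggested rename; every value here is illustrative.
use Vpg\Disturb\Client\Command;

Command::start(
    'my-workflow-id',        // string $workflowId
    ['foo' => 'bar'],        // array  $payloadHash
    '127.0.0.1',             // string $brokers
    'disturb-test-manager'   // string $topicName
);
```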

@TravisBuddy

Travis tests have failed

Hey mbrenguier,
Please read the following log in order to understand the failure reason.
It'll be awesome if you fix what's wrong and commit the changes.

1st Build

vendor/bin/phpcs --standard=./phpcs.xml ./Library/
FILE: /home/travis/build/vpg/disturb/Library/Client/Command.php
--------------------------------------------------------------------------------
FOUND 3 ERRORS AND 1 WARNING AFFECTING 4 LINES
--------------------------------------------------------------------------------
  1 | WARNING | A file should declare new symbols (classes, functions,
    |         | constants, etc.) and cause no other side effects, or it should
    |         | execute logic with side effects, but should not do both. The
    |         | first symbol is defined on line 11 and the first side effect is
    |         | on line 9.
 11 | ERROR   | Missing doc comment for class Command
 21 | ERROR   | Missing @return tag in function comment
 42 | ERROR   | Missing @return tag in function comment
--------------------------------------------------------------------------------

Time: 1.38 secs; Memory: 10Mb

vendor/phpunit/phpunit/phpunit -c Tests/phpunit.xml
PHPUnit 6.5.6 by Sebastian Bergmann and contributors.

Runtime:       PHP 7.1.11 with Xdebug 2.5.5
Configuration: /home/travis/build/vpg/disturb/Tests/phpunit.xml

2018-02-15 14:40:49 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-15 14:40:49 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
Undefined variable: di

@TravisBuddy

Travis tests have failed

Hey mbrenguier,
Please read the following log in order to understand the failure reason.
It'll be awesome if you fix what's wrong and commit the changes.

1st Build

vendor/bin/phpcs --standard=./phpcs.xml ./Library/
FILE: /home/travis/build/vpg/disturb/Library/Client/Command.php
--------------------------------------------------------------------------------
FOUND 0 ERRORS AND 1 WARNING AFFECTING 1 LINE
--------------------------------------------------------------------------------
 1 | WARNING | A file should declare new symbols (classes, functions,
   |         | constants, etc.) and cause no other side effects, or it should
   |         | execute logic with side effects, but should not do both. The
   |         | first symbol is defined on line 18 and the first side effect is
   |         | on line 9.
--------------------------------------------------------------------------------

Time: 1.82 secs; Memory: 10Mb

vendor/phpunit/phpunit/phpunit -c Tests/phpunit.xml
PHPUnit 6.5.6 by Sebastian Bergmann and contributors.

Runtime:       PHP 7.1.11 with Xdebug 2.5.5
Configuration: /home/travis/build/vpg/disturb/Tests/phpunit.xml

2018-02-15 14:51:49 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
2018-02-15 14:51:49 [INFO] Connecting to Elastic "http:\/\/127.0.0.1:9200" on disturb_context
Undefined variable: di
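The remaining phpcs warning is the PSR-1 rule that a file must either declare symbols or execute side effects, but not both. A simplified sketch of the usual fix follows; the file contents shown are assumptions for illustration, not the actual Command.php:

```php
<?php
// Library/Client/Command.php — declaration only, no side effects.
//
// The offending pattern was, roughly:
//   require __DIR__ . '/bootstrap.php';  // side effect (the line 9 flagged above)
//   class Command { ... }                // symbol declaration (line 18)
//
// Moving the require/bootstrap into the entry-point script that *uses*
// the class keeps this file declaration-only and clears the warning.

namespace Vpg\Disturb\Client;

/**
 * Disturb client command entry point.
 */
class Command
{
    // ...
}
```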

@mbrenguier changed the title from "adding disturb starter" to "adding disturb command" on Feb 16, 2018