==> Synchronizing chroot copy [/home/alhp/workspace/chroot/root] -> [build_b7cbdcd8-e947-4c00-b258-8f106e541c5e]...done
==> Making package: seaweedfs 3.89-1.1 (Tue Jun 3 10:54:24 2025)
==> Retrieving sources...
  -> Downloading seaweedfs-3.89.tar.gz...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 28.6M  100 28.6M    0     0  8707k      0  0:00:03  0:00:03 --:--:-- 10.5M
==> Validating source files with sha256sums...
    seaweedfs-3.89.tar.gz ... Passed
==> Making package: seaweedfs 3.89-1.1 (Tue Jun 3 08:54:30 2025)
==> Checking runtime dependencies...
==> Installing missing dependencies...
resolving dependencies...
looking for conflicting packages...

Package (1)    New Version  Net Change

extra/mailcap  2.1.54-2       0.11 MiB

Total Installed Size:  0.11 MiB

:: Proceed with installation? [Y/n]
checking keyring...
checking package integrity...
loading package files...
checking for file conflicts...
:: Processing package changes...
installing mailcap...
:: Running post-transaction hooks...
(1/1) Arming ConditionNeedsUpdate...
==> Checking buildtime dependencies...
==> Installing missing dependencies...
resolving dependencies...
looking for conflicting packages...

Package (1)  New Version  Net Change

extra/go     2:1.24.3-1   237.84 MiB

Total Installed Size:  237.84 MiB

:: Proceed with installation? [Y/n]
checking keyring...
checking package integrity...
loading package files...
checking for file conflicts...
:: Processing package changes...
installing go...
:: Running post-transaction hooks...
(1/1) Arming ConditionNeedsUpdate...
==> Retrieving sources...
  -> Found seaweedfs-3.89.tar.gz
==> WARNING: Skipping all source file integrity checks.
==> Extracting sources...
  -> Extracting seaweedfs-3.89.tar.gz with bsdtar
==> Starting prepare()...
==> Starting build()...
==> Starting check()...
?   	github.com/seaweedfs/seaweedfs/weed	[no test files]
=== RUN   TestConcurrentAddRemoveNodes
--- PASS: TestConcurrentAddRemoveNodes (0.00s)
PASS
ok  	github.com/seaweedfs/seaweedfs/weed/cluster	0.005s
=== RUN   TestAddServer
I0603 08:55:48.729598 lock_ring.go:43 add server localhost:8080
I0603 08:55:48.729712 lock_ring.go:43 add server localhost:8081
I0603 08:55:48.729718 lock_ring.go:43 add server localhost:8082
I0603 08:55:48.729720 lock_ring.go:43 add server localhost:8083
I0603 08:55:48.729722 lock_ring.go:43 add server localhost:8084
I0603 08:55:48.729724 lock_ring.go:59 remove server localhost:8084
I0603 08:55:48.729726 lock_ring.go:59 remove server localhost:8082
I0603 08:55:48.729728 lock_ring.go:59 remove server localhost:8080
--- PASS: TestAddServer (0.11s)
=== RUN   TestLockRing
--- PASS: TestLockRing (0.22s)
PASS
ok  	github.com/seaweedfs/seaweedfs/weed/cluster/lock_manager	0.338s
=== RUN   TestReadingTomlConfiguration
database is map[connection_max:5000 enabled:true ports:[8001 8001 8002] server:192.168.1.1]
servers is map[alpha:map[dc:eqdc10 ip:10.0.0.1] beta:map[dc:eqdc10 ip:10.0.0.2]]
alpha ip is 10.0.0.1
--- PASS: TestReadingTomlConfiguration (0.00s)
=== RUN   TestXYZ
I0603 08:55:50.826223 volume_test.go:12 Last-Modified Mon, 08 Jul 2013 08:53:16 GMT
--- PASS: TestXYZ (0.00s)
PASS
ok  	github.com/seaweedfs/seaweedfs/weed/command	0.027s
?   	github.com/seaweedfs/seaweedfs/weed/command/scaffold	[no test files]
=== RUN   TestChunkGroup_doSearchChunks
--- PASS: TestChunkGroup_doSearchChunks (0.00s)
=== RUN   TestDoMaybeManifestize
test 0
test 1
test 2
test 3
--- PASS: TestDoMaybeManifestize (0.00s)
=== RUN   Test_removeGarbageChunks
--- PASS: Test_removeGarbageChunks (0.00s)
=== RUN   TestDoMinusChunks
2025/06/03 08:55:50 first deleted chunks: [file_id:"1" size:3 modified_ts_ns:100 source_file_id:"11" file_id:"2" offset:3 size:3 modified_ts_ns:200 file_id:"3" offset:6 size:3 modified_ts_ns:300 source_file_id:"33"]
2025/06/03 08:55:50 clusterA synced empty chunks event result: []
--- PASS: TestDoMinusChunks (0.00s)
=== RUN   TestCompactFileChunksRealCase
I0603 08:55:50.811836 filechunks2_test.go:84 before chunk 2,512f31f2c0700a [ 0, 25)
I0603 08:55:50.812116 filechunks2_test.go:84 before chunk 6,512f2c2e24e9e8 [ 868352, 917585)
I0603 08:55:50.812122 filechunks2_test.go:84 before chunk 7,514468dd5954ca [ 884736, 901120)
I0603 08:55:50.812124 filechunks2_test.go:84 before chunk 5,5144463173fe77 [ 917504, 2297856)
I0603 08:55:50.812126 filechunks2_test.go:84 before chunk 4,51444c7ab54e2d [ 2301952, 2367488)
I0603 08:55:50.812128 filechunks2_test.go:84 before chunk 4,514450e643ad22 [ 2371584, 2420736)
I0603 08:55:50.812130 filechunks2_test.go:84 before chunk 6,514456a5e9e4d7 [ 2449408, 2490368)
I0603 08:55:50.812131 filechunks2_test.go:84 before chunk 3,51444f8d53eebe [ 2494464, 2555904)
I0603 08:55:50.812133 filechunks2_test.go:84 before chunk 4,5144578b097c7e [ 2560000, 2596864)
I0603 08:55:50.812135 filechunks2_test.go:84 before chunk 3,51445500b6b4ac [ 2637824, 2678784)
I0603 08:55:50.812137 filechunks2_test.go:84 before chunk 1,51446285e52a61 [ 2695168, 2715648)
I0603 08:55:50.812150 filechunks2_test.go:84 compacted chunk 2,512f31f2c0700a [ 0, 25)
I0603 08:55:50.812153 filechunks2_test.go:84 compacted chunk 6,512f2c2e24e9e8 [ 868352, 917585)
I0603 08:55:50.812155 filechunks2_test.go:84 compacted chunk 7,514468dd5954ca [ 884736, 901120)
I0603 08:55:50.812157 filechunks2_test.go:84 compacted chunk 5,5144463173fe77 [ 917504, 2297856)
I0603 08:55:50.812159 filechunks2_test.go:84 compacted chunk 4,51444c7ab54e2d [ 2301952, 2367488)
I0603 08:55:50.812161 filechunks2_test.go:84 compacted chunk 4,514450e643ad22 [ 2371584, 2420736)
I0603 08:55:50.812162 filechunks2_test.go:84 compacted chunk 6,514456a5e9e4d7 [ 2449408, 2490368)
I0603 08:55:50.812164 filechunks2_test.go:84 compacted chunk 3,51444f8d53eebe [ 2494464, 2555904)
I0603 08:55:50.812166 filechunks2_test.go:84 compacted chunk 4,5144578b097c7e [ 2560000, 2596864)
I0603 08:55:50.812168 filechunks2_test.go:84 compacted chunk 3,51445500b6b4ac [ 2637824, 2678784)
I0603 08:55:50.812170 filechunks2_test.go:84 compacted chunk 1,51446285e52a61 [ 2695168, 2715648)
--- PASS: TestCompactFileChunksRealCase (0.00s)
=== RUN   TestReadResolvedChunks
resolved to 4 visible intervales
[0,50) a 1
[50,150) b 2
[175,275) e 5
[275,300) d 4
--- PASS: TestReadResolvedChunks (0.00s)
=== RUN   TestReadResolvedChunks2
resolved to 2 visible intervales
[200,225) e 5
[225,250) c 3
--- PASS: TestReadResolvedChunks2 (0.00s)
=== RUN   TestRandomizedReadResolvedChunks
--- PASS: TestRandomizedReadResolvedChunks (0.00s)
=== RUN   TestSequentialReadResolvedChunks
visibles 13--- PASS: TestSequentialReadResolvedChunks (0.00s)
=== RUN   TestActualReadResolvedChunks
[0,2097152) 5,e7b96fef48 1634447487595823000
[2097152,4194304) 5,e5562640b9 1634447487595826000
[4194304,6291456) 5,df033e0fe4 1634447487595827000
[6291456,8388608) 7,eb08148a9b 1634447487595827000
[8388608,10485760) 7,e0f92d1604 1634447487595828000
[10485760,12582912) 7,e33cb63262 1634447487595828000
[12582912,14680064) 5,ea98e40e93 1634447487595829000
[14680064,16777216) 5,e165661172 1634447487595829000
[16777216,18874368) 3,e692097486 1634447487595830000
[18874368,20971520) 3,e28e2e3cbd 1634447487595830000
[20971520,23068672) 3,e443974d4e 1634447487595830000
[23068672,25165824) 2,e815bed597 1634447487595831000
[25165824,27140560) 5,e94715199e 1634447487595832000
--- PASS: TestActualReadResolvedChunks (0.00s)
=== RUN   TestActualReadResolvedChunks2
[0,184320) 1,e7b96fef48 1
[184320,188416) 2,33562640b9 4
[188416,2285568) 4,df033e0fe4 3
--- PASS: TestActualReadResolvedChunks2 (0.00s)
=== RUN   TestCompactFileChunks
--- PASS: TestCompactFileChunks (0.00s)
=== RUN   TestCompactFileChunks2
--- PASS: TestCompactFileChunks2 (0.00s)
=== RUN   TestRandomFileChunksCompact
--- PASS: TestRandomFileChunksCompact (0.00s)
=== RUN   TestIntervalMerging
2025/06/03 08:55:50 ++++++++++ merged test case 0 ++++++++++++++++++++
2025/06/03 08:55:50 test case 0, interval start=0, stop=100, fileId=abc
2025/06/03 08:55:50 test case 0, interval start=100, stop=200, fileId=asdf
2025/06/03 08:55:50 test case 0, interval start=200, stop=300, fileId=fsad
2025/06/03 08:55:50 ++++++++++ merged test case 1 ++++++++++++++++++++
2025/06/03 08:55:50 test case 1, interval start=0, stop=200, fileId=asdf
2025/06/03 08:55:50 ++++++++++ merged test case 2 ++++++++++++++++++++
2025/06/03 08:55:50 test case 2, interval start=0, stop=70, fileId=b
2025/06/03 08:55:50 test case 2, interval start=70, stop=100, fileId=a
2025/06/03 08:55:50 ++++++++++ merged test case 3 ++++++++++++++++++++
2025/06/03 08:55:50 test case 3, interval start=0, stop=50, fileId=asdf
2025/06/03 08:55:50 test case 3, interval start=50, stop=300, fileId=xxxx
2025/06/03 08:55:50 ++++++++++ merged test case 4 ++++++++++++++++++++
2025/06/03 08:55:50 test case 4, interval start=0, stop=200, fileId=asdf
2025/06/03 08:55:50 test case 4, interval start=250, stop=500, fileId=xxxx
2025/06/03 08:55:50 ++++++++++ merged test case 5 ++++++++++++++++++++
2025/06/03 08:55:50 test case 5, interval start=0, stop=200, fileId=d
2025/06/03 08:55:50 test case 5, interval start=200, stop=220, fileId=c
2025/06/03 08:55:50 ++++++++++ merged test case 6 ++++++++++++++++++++
2025/06/03 08:55:50 test case 6, interval start=0, stop=100, fileId=xyz
2025/06/03 08:55:50 ++++++++++ merged test case 7 ++++++++++++++++++++
2025/06/03 08:55:50 test case 7, interval start=0, stop=2097152, fileId=3,029565bf3092
2025/06/03 08:55:50 test case 7, interval start=2097152, stop=5242880, fileId=6,029632f47ae2
2025/06/03 08:55:50 test case 7, interval start=5242880, stop=8388608, fileId=2,029734c5aa10
2025/06/03 08:55:50 test case 7, interval start=8388608, stop=11534336, fileId=5,02982f80de50
2025/06/03 08:55:50 test case 7, interval start=11534336, stop=14376529, fileId=7,0299ad723803
2025/06/03 08:55:50 ++++++++++ merged test case 8 ++++++++++++++++++++
2025/06/03 08:55:50 test case 8, interval start=0, stop=77824, fileId=4,0b3df938e301
2025/06/03 08:55:50 test case 8, interval start=77824, stop=208896, fileId=4,0b3f0c7202f0
2025/06/03 08:55:50 test case 8, interval start=208896, stop=339968, fileId=2,0b4031a72689
2025/06/03 08:55:50 test case 8, interval start=339968, stop=471040, fileId=3,0b416a557362
2025/06/03 08:55:50 test case 8, interval start=471040, stop=472225, fileId=6,0b3e0650019c
--- PASS: TestIntervalMerging (0.00s)
=== RUN   TestChunksReading
2025/06/03 08:55:50 ++++++++++ read test case 0 ++++++++++++++++++++
2025/06/03 08:55:50 read case 0, chunk 0, offset=0, size=100, fileId=abc
2025/06/03 08:55:50 read case 0, chunk 1, offset=0, size=100, fileId=asdf
2025/06/03 08:55:50 read case 0, chunk 2, offset=0, size=50, fileId=fsad
2025/06/03 08:55:50 ++++++++++ read test case 1 ++++++++++++++++++++
2025/06/03 08:55:50 read case 1, chunk 0, offset=50, size=100, fileId=asdf
2025/06/03 08:55:50 ++++++++++ read test case 2 ++++++++++++++++++++
2025/06/03 08:55:50 read case 2, chunk 0, offset=20, size=30, fileId=b
2025/06/03 08:55:50 read case 2, chunk 1, offset=57, size=10, fileId=a
2025/06/03 08:55:50 ++++++++++ read test case 3 ++++++++++++++++++++
2025/06/03 08:55:50 read case 3, chunk 0, offset=0, size=50, fileId=asdf
2025/06/03 08:55:50 read case 3, chunk 1, offset=0, size=150, fileId=xxxx
2025/06/03 08:55:50 ++++++++++ read test case 4 ++++++++++++++++++++
2025/06/03 08:55:50 read case 4, chunk 0, offset=0, size=200, fileId=asdf
2025/06/03 08:55:50 read case 4, chunk 1, offset=0, size=150, fileId=xxxx
2025/06/03 08:55:50 ++++++++++ read test case 5 ++++++++++++++++++++
2025/06/03 08:55:50 read case 5, chunk 0, offset=0, size=200, fileId=c
2025/06/03 08:55:50 read case 5, chunk 1, offset=130, size=20, fileId=b
2025/06/03 08:55:50 ++++++++++ read test case 6 ++++++++++++++++++++
2025/06/03 08:55:50 read case 6, chunk 0, offset=0, size=100, fileId=xyz
2025/06/03 08:55:50 ++++++++++ read test case 7 ++++++++++++++++++++
2025/06/03 08:55:50 read case 7, chunk 0, offset=0, size=100, fileId=abc
2025/06/03 08:55:50 read case 7, chunk 1, offset=0, size=100, fileId=asdf
2025/06/03 08:55:50 ++++++++++ read test case 8 ++++++++++++++++++++
2025/06/03 08:55:50 read case 8, chunk 0, offset=0, size=90, fileId=abc
2025/06/03 08:55:50 read case 8, chunk 1, offset=0, size=100, fileId=asdf
2025/06/03 08:55:50 read case 8, chunk 2, offset=0, size=110, fileId=fsad
2025/06/03 08:55:50 ++++++++++ read test case 9 ++++++++++++++++++++
2025/06/03 08:55:50 read case 9, chunk 0, offset=0, size=43175936, fileId=2,111fc2cbfac1
2025/06/03 08:55:50 read case 9, chunk 1, offset=0, size=9805824, fileId=2,112a36ea7f85
2025/06/03 08:55:50 read case 9, chunk 2, offset=0, size=19582976, fileId=4,112d5f31c5e7
2025/06/03 08:55:50 read case 9, chunk 3, offset=0, size=60690432, fileId=1,113245f0cdb6
2025/06/03 08:55:50 read case 9, chunk 4, offset=0, size=4014080, fileId=3,1141a70733b5
2025/06/03 08:55:50 read case 9, chunk 5, offset=0, size=16309588, fileId=1,114201d5bbdb
--- PASS: TestChunksReading (0.00s)
=== RUN   TestViewFromVisibleIntervals
--- PASS: TestViewFromVisibleIntervals (0.00s)
=== RUN   TestViewFromVisibleIntervals2
--- PASS: TestViewFromVisibleIntervals2 (0.00s)
=== RUN   TestViewFromVisibleIntervals3
--- PASS: TestViewFromVisibleIntervals3 (0.00s)
=== RUN   TestCompactFileChunks3
--- PASS: TestCompactFileChunks3 (0.00s)
=== RUN   TestFilerConf
--- PASS: TestFilerConf (0.00s)
=== RUN   TestProtoMarshal
e to: 234,2423423422 * 2342342354223234,2342342342"# 0Ø: text/jsonP
--- PASS: TestProtoMarshal (0.00s)
=== RUN   TestIntervalList_Overlay
[0,25) 6 6
[25,50) 1 1
[50,150) 2 2
[175,210) 5 5
[210,225) 3 3
[225,250) 4 4
[0,25) 6 6
[25,50) 1 1
[50,150) 7 7
[175,210) 5 5
[210,225) 3 3
[225,250) 4 4
--- PASS: TestIntervalList_Overlay (0.00s)
=== RUN   TestIntervalList_Overlay2
[0,50) 2 2
[50,100) 1 1
--- PASS: TestIntervalList_Overlay2 (0.00s)
=== RUN   TestIntervalList_Overlay3
[0,60) 2 2
[60,100) 1 1
--- PASS: TestIntervalList_Overlay3 (0.00s)
=== RUN   TestIntervalList_Overlay4
[0,100) 2 2
--- PASS: TestIntervalList_Overlay4 (0.00s)
=== RUN   TestIntervalList_Overlay5
[0,110) 2 2
--- PASS: TestIntervalList_Overlay5 (0.00s)
=== RUN   TestIntervalList_Overlay6
[50,110) 2 2
--- PASS: TestIntervalList_Overlay6 (0.00s)
=== RUN   TestIntervalList_Overlay7
[50,90) 2 2
[90,100) 1 1
--- PASS: TestIntervalList_Overlay7 (0.00s)
=== RUN   TestIntervalList_Overlay8
[50,60) 1 1
[60,90) 2 2
[90,100) 1 1
--- PASS: TestIntervalList_Overlay8 (0.00s)
=== RUN   TestIntervalList_Overlay9
[50,60) 1 1
[60,100) 2 2
--- PASS: TestIntervalList_Overlay9 (0.00s)
=== RUN   TestIntervalList_Overlay10
[50,60) 1 1
[60,110) 2 2
--- PASS: TestIntervalList_Overlay10 (0.00s)
=== RUN   TestIntervalList_Overlay11
[0,90) 5 5
[90,100) 1 1
[100,110) 2 2
--- PASS: TestIntervalList_Overlay11 (0.00s)
=== RUN   TestIntervalList_insertInterval1
[50,150) 2 2
[200,250) 3 3
--- PASS: TestIntervalList_insertInterval1 (0.00s)
=== RUN   TestIntervalList_insertInterval2
[0,25) 3 3
[50,150) 2 2
--- PASS: TestIntervalList_insertInterval2 (0.00s)
=== RUN   TestIntervalList_insertInterval3
[0,75) 3 3
[75,150) 2 2
[200,250) 4 4
--- PASS: TestIntervalList_insertInterval3 (0.00s)
=== RUN   TestIntervalList_insertInterval4
[0,200) 3 3
[200,250) 4 4
--- PASS: TestIntervalList_insertInterval4 (0.00s)
=== RUN   TestIntervalList_insertInterval5
[0,225) 5 5
[225,250) 4 4
--- PASS: TestIntervalList_insertInterval5 (0.00s)
=== RUN   TestIntervalList_insertInterval6
[0,50) 1 1
[50,150) 2 2
[150,200) 1 1
[200,250) 4 4
[250,275) 1 1
--- PASS: TestIntervalList_insertInterval6 (0.00s)
=== RUN   TestIntervalList_insertInterval7
[50,150) 2 2
[150,200) 1 1
[200,250) 4 4
[250,275) 1 1
--- PASS: TestIntervalList_insertInterval7 (0.00s)
=== RUN   TestIntervalList_insertInterval8
[50,75) 2 2
[75,200) 3 3
[200,250) 4 4
[250,275) 3 3
--- PASS: TestIntervalList_insertInterval8 (0.00s)
=== RUN   TestIntervalList_insertInterval9
[50,150) 3 3
[200,250) 4 4
--- PASS: TestIntervalList_insertInterval9 (0.00s)
=== RUN   TestIntervalList_insertInterval10
[50,100) 2 2
[100,200) 5 5
[200,300) 4 4
--- PASS: TestIntervalList_insertInterval10 (0.00s)
=== RUN   TestIntervalList_insertInterval11
[0,64) 1 1
[64,68) 2 2
[68,72) 4 4
[72,136) 3 3
--- PASS: TestIntervalList_insertInterval11 (0.00s)
=== RUN   TestIntervalList_insertIntervalStruct
[0,64) 1 {1 0 0}
[64,68) 4 {4 0 0}
[68,72) 2 {2 0 0}
[72,136) 3 {3 0 0}
--- PASS: TestIntervalList_insertIntervalStruct (0.00s)
=== RUN   TestReaderAt
--- PASS: TestReaderAt (0.00s)
=== RUN   TestReaderAt0
--- PASS: TestReaderAt0 (0.00s)
=== RUN   TestReaderAt1
--- PASS: TestReaderAt1 (0.00s)
=== RUN   TestReaderAtGappedChunksDoNotLeak
--- PASS: TestReaderAtGappedChunksDoNotLeak (0.00s)
=== RUN   TestReaderAtSparseFileDoesNotLeak
--- PASS: TestReaderAtSparseFileDoesNotLeak (0.00s)
=== RUN   TestFilerRemoteStorage_FindRemoteStorageClient
--- PASS: TestFilerRemoteStorage_FindRemoteStorageClient (0.00s)
=== RUN   TestS3Conf
--- PASS: TestS3Conf (0.00s)
=== RUN   TestCheckDuplicateAccessKey
--- PASS: TestCheckDuplicateAccessKey (0.00s)
PASS
ok  	github.com/seaweedfs/seaweedfs/weed/filer	0.018s
?   	github.com/seaweedfs/seaweedfs/weed/filer/abstract_sql	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer/arangodb	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer/cassandra	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer/cassandra2	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer/elastic/v7	[no test files]
=== RUN   TestStore
--- PASS: TestStore (0.00s)
PASS
ok  	github.com/seaweedfs/seaweedfs/weed/filer/etcd	0.015s
?   	github.com/seaweedfs/seaweedfs/weed/filer/hbase	[no test files]
=== RUN   TestCreateAndFind
I0603 08:55:50.819299 leveldb_store.go:47 filer store dir: /tmp/TestCreateAndFind2639262728/001
I0603 08:55:50.819553 file_util.go:27 Folder /tmp/TestCreateAndFind2639262728/001 Permission: -rwxr-xr-x
I0603 08:55:50.820194 filer.go:155 create filer.store.id to 265485282
--- PASS: TestCreateAndFind (0.01s)
=== RUN   TestEmptyRoot
I0603 08:55:50.822612 leveldb_store.go:47 filer store dir: /tmp/TestEmptyRoot2846497966/001
I0603 08:55:50.822632 file_util.go:27 Folder /tmp/TestEmptyRoot2846497966/001 Permission: -rwxr-xr-x
I0603 08:55:50.823152 filer.go:155 create filer.store.id to -1330040309
--- PASS: TestEmptyRoot (0.00s)
PASS
ok  	github.com/seaweedfs/seaweedfs/weed/filer/leveldb	0.023s
=== RUN   TestCreateAndFind
I0603 08:55:50.817574 leveldb2_store.go:43 filer store leveldb2 dir: /tmp/TestCreateAndFind3344237684/001
I0603 08:55:50.817888 file_util.go:27 Folder /tmp/TestCreateAndFind3344237684/001 Permission: -rwxr-xr-x
I0603 08:55:50.818821 filer.go:155 create filer.store.id to -691017760
--- PASS: TestCreateAndFind (0.01s)
=== RUN   TestEmptyRoot
I0603 08:55:50.822152 leveldb2_store.go:43 filer store leveldb2 dir: /tmp/TestEmptyRoot243056273/001
I0603 08:55:50.822168 file_util.go:27 Folder /tmp/TestEmptyRoot243056273/001 Permission: -rwxr-xr-x
I0603 08:55:50.822951 filer.go:155 create filer.store.id to 1128328959
--- PASS: TestEmptyRoot (0.00s)
PASS
ok  	github.com/seaweedfs/seaweedfs/weed/filer/leveldb2	0.023s
=== RUN   TestCreateAndFind
I0603 08:55:50.816889 leveldb3_store.go:50 filer store leveldb3 dir: /tmp/TestCreateAndFind884995750/001
I0603 08:55:50.817072 file_util.go:27 Folder /tmp/TestCreateAndFind884995750/001 Permission: -rwxr-xr-x
I0603 08:55:50.817758 filer.go:155 create filer.store.id to -2020485997
--- PASS: TestCreateAndFind (0.01s)
=== RUN   TestEmptyRoot
I0603 08:55:50.822023 leveldb3_store.go:50 filer store leveldb3 dir: /tmp/TestEmptyRoot2438917390/001
I0603 08:55:50.822044 file_util.go:27 Folder /tmp/TestEmptyRoot2438917390/001 Permission: -rwxr-xr-x
I0603 08:55:50.822568 filer.go:155 create filer.store.id to 1431152701
--- PASS: TestEmptyRoot (0.00s)
PASS
ok  	github.com/seaweedfs/seaweedfs/weed/filer/leveldb3	0.022s
?   	github.com/seaweedfs/seaweedfs/weed/filer/mongodb	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer/mysql	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer/mysql2	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer/postgres	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer/postgres2	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer/redis	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer/redis2	[no test files]
testing: warning: no tests to run
PASS
ok  	github.com/seaweedfs/seaweedfs/weed/filer/redis3	0.015s [no tests to run]
?   	github.com/seaweedfs/seaweedfs/weed/filer/redis_lua	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer/redis_lua/stored_procedure	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer/sqlite	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer/store_test	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer/tarantool	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer/tikv	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer/ydb	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/filer_client	[no test files]
=== RUN   TestShortHostname
--- PASS: TestShortHostname (0.00s)
=== RUN   TestInfo
I0603 08:55:51.664686 glog_test.go:92 test
--- PASS: TestInfo (0.00s)
=== RUN   TestInfoDepth
I0603 08:55:51.664745 glog_test.go:109 depth-test0
I0603 08:55:51.664746 glog_test.go:110 depth-test1
--- PASS: TestInfoDepth (0.00s)
=== RUN   TestCopyStandardLogToPanic
--- PASS: TestCopyStandardLogToPanic (0.00s)
=== RUN   TestStandardLog
I0603 08:55:51.664830 glog_test.go:163 test
--- PASS: TestStandardLog (0.00s)
=== RUN   TestHeader
I0102 15:04:05.067890 glog_test.go:181 test
--- PASS: TestHeader (0.00s)
=== RUN   TestError
E0603 08:55:51.664871 glog_test.go:202 test
--- PASS: TestError (0.00s)
=== RUN   TestWarning
W0603 08:55:51.664884 glog_test.go:224 test
--- PASS: TestWarning (0.00s)
=== RUN   TestV
I0603 08:55:51.664895 glog_test.go:243 test
--- PASS: TestV (0.00s)
=== RUN   TestVmoduleOn
I0603 08:55:51.664925 glog_test.go:267 test
--- PASS: TestVmoduleOn (0.00s)
=== RUN   TestVmoduleOff
--- PASS: TestVmoduleOff (0.00s)
=== RUN   TestVmoduleGlob
--- PASS: TestVmoduleGlob (0.00s)
=== RUN   TestRollover
I0603 08:55:51.664995 glog_test.go:339 x
I0603 08:55:51.665341 glog_test.go:348 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
I0603 08:55:52.668806 glog_test.go:361 x
--- PASS: TestRollover (1.00s)
=== RUN   TestLogBacktraceAt
I0603 08:55:52.668983 glog_test.go:395 we want a stack trace here
goroutine 21 [running]:
github.com/seaweedfs/seaweedfs/weed/glog.stacks(0x0)
	/startdir/src/seaweedfs-3.89/weed/glog/glog.go:768 +0x85
github.com/seaweedfs/seaweedfs/weed/glog.(*loggingT).output(0x6ed860, 0x0, 0xc0000dc1c0, {0x60bcae?, 0x1?}, 0x0?, 0x0)
	/startdir/src/seaweedfs-3.89/weed/glog/glog.go:677 +0xe5
github.com/seaweedfs/seaweedfs/weed/glog.(*loggingT).printDepth(0x6ed860, 0x0, 0xc000090e90?, {0xc000090e30, 0x1, 0x1})
	/startdir/src/seaweedfs-3.89/weed/glog/glog.go:648 +0xea
github.com/seaweedfs/seaweedfs/weed/glog.(*loggingT).print(...)
	/startdir/src/seaweedfs-3.89/weed/glog/glog.go:639
github.com/seaweedfs/seaweedfs/weed/glog.Info(...)
	/startdir/src/seaweedfs-3.89/weed/glog/glog.go:1061
github.com/seaweedfs/seaweedfs/weed/glog.TestLogBacktraceAt(0xc0000e7a40)
	/startdir/src/seaweedfs-3.89/weed/glog/glog_test.go:395 +0x438
testing.tRunner(0xc0000e7a40, 0x5aade8)
	/usr/lib/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/lib/go/src/testing/testing.go:1851 +0x413
--- PASS: TestLogBacktraceAt (0.00s)
PASS
ok  	github.com/seaweedfs/seaweedfs/weed/glog	1.006s
=== RUN   TestGetActionsUserPath
--- PASS: TestGetActionsUserPath (0.00s)
=== RUN   TestGetActionsWildcardPath
--- PASS: TestGetActionsWildcardPath (0.00s)
=== RUN   TestGetActionsInvalidAction
--- PASS: TestGetActionsInvalidAction (0.00s)
=== RUN   TestCreateUser
--- PASS: TestCreateUser (0.00s)
=== RUN   TestListUsers
--- PASS: TestListUsers (0.00s)
=== RUN   TestListAccessKeys
--- PASS: TestListAccessKeys (0.00s)
=== RUN   TestGetUser
--- PASS: TestGetUser (0.00s)
=== RUN   TestCreatePolicy
--- PASS: TestCreatePolicy (0.00s)
=== RUN   TestPutUserPolicy
--- PASS: TestPutUserPolicy (0.00s)
=== RUN   TestPutUserPolicyError
E0603 08:55:52.231994 iamapi_management_handlers.go:508 PutUserPolicy: the user with name InvalidUser cannot be found
E0603 08:55:52.232151 iamapi_handlers.go:29 Response the user with name InvalidUser cannot be found
--- PASS: TestPutUserPolicyError (0.00s)
=== RUN   TestGetUserPolicy
--- PASS: TestGetUserPolicy (0.00s)
=== RUN   TestUpdateUser
--- PASS: TestUpdateUser (0.00s)
=== RUN   TestDeleteUser
--- PASS: TestDeleteUser (0.00s)
=== RUN   TestHandleImplicitUsername
--- PASS: TestHandleImplicitUsername (0.00s)
PASS
ok  	github.com/seaweedfs/seaweedfs/weed/iamapi	0.016s
=== RUN   TestCropping
--- PASS: TestCropping (0.07s)
=== RUN   TestXYZ
--- PASS: TestXYZ (0.33s)
=== RUN   TestResizing
--- PASS: TestResizing (0.02s)
PASS
ok  	github.com/seaweedfs/seaweedfs/weed/images	0.430s
=== RUN   TestInodeEntry_removeOnePath
=== RUN   TestInodeEntry_removeOnePath/actual_case
=== RUN   TestInodeEntry_removeOnePath/empty
=== RUN   TestInodeEntry_removeOnePath/single
=== RUN   TestInodeEntry_removeOnePath/first
=== RUN   TestInodeEntry_removeOnePath/middle
=== RUN   TestInodeEntry_removeOnePath/last
=== RUN   TestInodeEntry_removeOnePath/not_found
--- PASS: TestInodeEntry_removeOnePath (0.00s)
    --- PASS: TestInodeEntry_removeOnePath/actual_case (0.00s)
    --- PASS: TestInodeEntry_removeOnePath/empty (0.00s)
    --- PASS: TestInodeEntry_removeOnePath/single (0.00s)
    --- PASS: TestInodeEntry_removeOnePath/first (0.00s)
    --- PASS: TestInodeEntry_removeOnePath/middle (0.00s)
    --- PASS: TestInodeEntry_removeOnePath/last (0.00s)
    --- PASS: TestInodeEntry_removeOnePath/not_found (0.00s)
PASS
ok  	github.com/seaweedfs/seaweedfs/weed/mount	0.010s
?   	github.com/seaweedfs/seaweedfs/weed/mount/meta_cache	[no test files]
=== RUN   Test_PageChunkWrittenIntervalList
--- PASS: Test_PageChunkWrittenIntervalList (0.00s)
=== RUN   Test_PageChunkWrittenIntervalList1
--- PASS: Test_PageChunkWrittenIntervalList1 (0.00s)
=== RUN   TestUploadPipeline
--- PASS: TestUploadPipeline (18.63s)
PASS
ok  	github.com/seaweedfs/seaweedfs/weed/mount/page_writer	18.637s
?   	github.com/seaweedfs/seaweedfs/weed/mount/unmount	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/mq/agent	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/mq/broker	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/mq/client/agent_client	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/mq/client/pub_client	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/mq/client/sub_client	[no test files]
?   	github.com/seaweedfs/seaweedfs/weed/mq/logstore	[no test files]
=== RUN   Test_allocateOneBroker
=== RUN   Test_allocateOneBroker/test_only_one_broker
I0603 08:55:52.464545 allocate.go:81 EnsureAssignmentsToActiveBrokers: activeBrokers: 1, followerCount: 1, assignments: [partition:{ring_size:2520 range_stop:2520 unix_time_ns:1748940952464536058}]
I0603 08:55:52.465145 allocate.go:125 EnsureAssignmentsToActiveBrokers: activeBrokers: 1, followerCount: 1, assignments: [partition:{ring_size:2520 range_stop:2520 unix_time_ns:1748940952464536058} leader_broker:"localhost:17777"] hasChanges: true
I0603 08:55:52.465187 allocate.go:33 allocate topic partitions 1: [partition:{ring_size:2520 range_stop:2520 unix_time_ns:1748940952464536058} leader_broker:"localhost:17777"]
--- PASS: Test_allocateOneBroker (0.00s)
    --- PASS: Test_allocateOneBroker/test_only_one_broker (0.00s)
=== RUN   TestEnsureAssignmentsToActiveBrokersX
=== RUN   TestEnsureAssignmentsToActiveBrokersX/test_empty_leader
test empty leader before [partition:{} follower_broker:"localhost:2"]
I0603 08:55:52.465300 allocate.go:81 EnsureAssignmentsToActiveBrokers: activeBrokers: 6, followerCount: 1, assignments: [partition:{} follower_broker:"localhost:2"]
I0603 08:55:52.465368 allocate.go:125 EnsureAssignmentsToActiveBrokers: activeBrokers: 6, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:6" follower_broker:"localhost:2"] hasChanges: true
test empty leader after [partition:{} leader_broker:"localhost:6" follower_broker:"localhost:2"]
=== RUN   TestEnsureAssignmentsToActiveBrokersX/test_empty_follower
test empty follower before [partition:{} leader_broker:"localhost:1"]
I0603 08:55:52.465397 allocate.go:81 EnsureAssignmentsToActiveBrokers: activeBrokers: 6, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:1"]
I0603 08:55:52.465425 allocate.go:125 EnsureAssignmentsToActiveBrokers: activeBrokers: 6, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:6"] hasChanges: true
test empty follower after [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:6"]
=== RUN   TestEnsureAssignmentsToActiveBrokersX/test_dead_follower
test dead follower before [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:200"]
I0603 08:55:52.465455 allocate.go:81 EnsureAssignmentsToActiveBrokers: activeBrokers: 6, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:200"]
I0603 08:55:52.465488 allocate.go:125 EnsureAssignmentsToActiveBrokers: activeBrokers: 6, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:5"] hasChanges: true
test dead follower after [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:5"]
=== RUN   TestEnsureAssignmentsToActiveBrokersX/test_dead_leader_and_follower
test dead leader and follower before [partition:{} leader_broker:"localhost:100" follower_broker:"localhost:200"]
I0603 08:55:52.465512 allocate.go:81 EnsureAssignmentsToActiveBrokers: activeBrokers: 6, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:100" follower_broker:"localhost:200"]
I0603 08:55:52.465542 allocate.go:125 EnsureAssignmentsToActiveBrokers: activeBrokers: 6, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:3" follower_broker:"localhost:1"] hasChanges: true
test dead leader and follower after [partition:{} leader_broker:"localhost:3" follower_broker:"localhost:1"]
=== RUN   TestEnsureAssignmentsToActiveBrokersX/test_low_active_brokers
test low active brokers before [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:2"]
I0603 08:55:52.465575 allocate.go:81 EnsureAssignmentsToActiveBrokers: activeBrokers: 2, followerCount: 3, assignments: [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:2"]
I0603 08:55:52.465627 allocate.go:125 EnsureAssignmentsToActiveBrokers: activeBrokers: 2, followerCount: 3, assignments: [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:2"] hasChanges: false
test low active brokers after [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:2"]
=== RUN   TestEnsureAssignmentsToActiveBrokersX/test_low_active_brokers_with_one_follower
test low active brokers with one follower before [partition:{} leader_broker:"localhost:1"]
I0603 08:55:52.465682 allocate.go:81 EnsureAssignmentsToActiveBrokers: activeBrokers: 2, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:1"]
I0603 08:55:52.465701 allocate.go:125 EnsureAssignmentsToActiveBrokers: activeBrokers: 2, followerCount: 1, assignments: [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:1"] hasChanges: true
test low active brokers with one follower after [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:1"]
=== RUN   TestEnsureAssignmentsToActiveBrokersX/test_single_active_broker
test single active broker before [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:2"]
I0603 08:55:52.465718 allocate.go:81 EnsureAssignmentsToActiveBrokers: activeBrokers: 1, followerCount: 3, assignments: [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:2"]
I0603 08:55:52.465733 allocate.go:125 EnsureAssignmentsToActiveBrokers: activeBrokers: 1, followerCount: 3, assignments: [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:1"] hasChanges: true
test single active broker after [partition:{} leader_broker:"localhost:1" follower_broker:"localhost:1"]
--- PASS: TestEnsureAssignmentsToActiveBrokersX (0.00s)
    --- PASS: TestEnsureAssignmentsToActiveBrokersX/test_empty_leader (0.00s)
    --- PASS: TestEnsureAssignmentsToActiveBrokersX/test_empty_follower (0.00s)
    --- PASS: TestEnsureAssignmentsToActiveBrokersX/test_dead_follower (0.00s)
    --- PASS: TestEnsureAssignmentsToActiveBrokersX/test_dead_leader_and_follower (0.00s)
    --- PASS: TestEnsureAssignmentsToActiveBrokersX/test_low_active_brokers (0.00s)
    --- PASS: TestEnsureAssignmentsToActiveBrokersX/test_low_active_brokers_with_one_follower (0.00s)
    --- PASS: TestEnsureAssignmentsToActiveBrokersX/test_single_active_broker (0.00s)
=== RUN   TestBalanceTopicPartitionOnBrokers
=== RUN   TestBalanceTopicPartitionOnBrokers/test
--- PASS: TestBalanceTopicPartitionOnBrokers (0.00s)
    --- PASS: TestBalanceTopicPartitionOnBrokers/test (0.00s)
=== RUN   Test_findMissingPartitions
=== RUN   Test_findMissingPartitions/one_partition
=== RUN   Test_findMissingPartitions/two_partitions
=== RUN   Test_findMissingPartitions/four_partitions,_missing_last_two
=== RUN   Test_findMissingPartitions/four_partitions,_missing_first_two
=== RUN   Test_findMissingPartitions/four_partitions,_missing_middle_two
=== RUN   Test_findMissingPartitions/four_partitions,_missing_three
--- PASS: Test_findMissingPartitions (0.00s)
    --- PASS: Test_findMissingPartitions/one_partition (0.00s)
    --- PASS: Test_findMissingPartitions/two_partitions (0.00s)
    --- PASS: Test_findMissingPartitions/four_partitions,_missing_last_two (0.00s)
    --- PASS: Test_findMissingPartitions/four_partitions,_missing_first_two (0.00s)
    --- PASS: Test_findMissingPartitions/four_partitions,_missing_middle_two (0.00s)
    --- PASS: Test_findMissingPartitions/four_partitions,_missing_three (0.00s)
PASS
ok  	github.com/seaweedfs/seaweedfs/weed/mq/pub_balancer	0.011s
=== RUN   TestEnumScalarType
=== RUN   TestEnumScalarType/Boolean
=== RUN   TestEnumScalarType/Integer
=== RUN   TestEnumScalarType/Long
=== RUN   TestEnumScalarType/Float
=== RUN   TestEnumScalarType/Double
=== RUN   TestEnumScalarType/Bytes
=== RUN   TestEnumScalarType/String
--- PASS: TestEnumScalarType (0.00s)
    --- PASS: TestEnumScalarType/Boolean (0.00s)
    --- PASS: TestEnumScalarType/Integer (0.00s)
    --- PASS: TestEnumScalarType/Long (0.00s)
    --- PASS: TestEnumScalarType/Float (0.00s)
    --- PASS: TestEnumScalarType/Double (0.00s)
    --- PASS: TestEnumScalarType/Bytes (0.00s)
    --- PASS: TestEnumScalarType/String (0.00s)
=== RUN   TestField
--- PASS: TestField (0.00s)
=== RUN   TestRecordType
fields: < name: "field_key" field_index: 1 type: < scalar_type: INT32 > > fields: < name: "field_record" field_index: 2 type: < record_type: < fields: < name: "field_1" field_index: 1 type: < scalar_type: INT32 > > fields: < name: "field_2" field_index: 2 type: < scalar_type: STRING > > > > >
{"fields":[{"name":"field_key","field_index":1,"type":{"Kind":{"ScalarType":1}}},{"name":"field_record","field_index":2,"type":{"Kind":{"RecordType":{"fields":[{"name":"field_1","field_index":1,"type":{"Kind":{"ScalarType":1}}},{"name":"field_2","field_index":2,"type":{"Kind":{"ScalarType":7}}}]}}}}]}
--- PASS: TestRecordType (0.00s)
=== RUN   TestStructToSchema
=== RUN   TestStructToSchema/scalar_type
=== RUN   TestStructToSchema/simple_struct_type
=== RUN   TestStructToSchema/simple_list
=== RUN   TestStructToSchema/simple_[]byte
=== RUN   TestStructToSchema/nested_simpe_structs
=== RUN   TestStructToSchema/nested_struct_type
--- PASS: TestStructToSchema (0.00s)
    --- PASS: TestStructToSchema/scalar_type (0.00s)
    --- PASS: TestStructToSchema/simple_struct_type (0.00s)
    --- PASS: TestStructToSchema/simple_list (0.00s)
    --- PASS: TestStructToSchema/simple_[]byte (0.00s)
    --- PASS: TestStructToSchema/nested_simpe_structs (0.00s)
    --- PASS: TestStructToSchema/nested_struct_type (0.00s)
=== RUN   TestToParquetLevels
=== RUN   TestToParquetLevels/nested_type
--- PASS: TestToParquetLevels (0.00s)
    --- PASS: TestToParquetLevels/nested_type (0.00s)
=== RUN   TestWriteReadParquet
RecordType: fields:{name:"Address" type:{record_type:{fields:{name:"City" type:{scalar_type:STRING}} fields:{name:"Street" type:{scalar_type:STRING}}}}} fields:{name:"Company" type:{scalar_type:STRING}} fields:{name:"CreatedAt" type:{scalar_type:INT64}} fields:{name:"ID"
type:{scalar_type:INT64}} fields:{name:"Person" type:{record_type:{fields:{name:"emails" type:{list_type:{element_type:{scalar_type:STRING}}}} fields:{name:"zName" type:{scalar_type:STRING}}}}} ParquetSchema: message example { optional group Address { optional binary City; optional binary Street; } optional binary Company; optional int64 CreatedAt; optional int64 ID; optional group Person { repeated binary emails; optional binary zName; } } Go Type: struct { Address *struct { City *[]uint8; Street *[]uint8 }; Company *[]uint8; CreatedAt *int64; ID *int64; Person *struct { Emails []*[]uint8; ZName *[]uint8 } } Write RecordValue: fields:{key:"Company" value:{string_value:"company_0"}} fields:{key:"CreatedAt" value:{int64_value:2}} fields:{key:"ID" value:{int64_value:1}} fields:{key:"Person" value:{record_value:{fields:{key:"emails" value:{list_value:{values:{string_value:"john_0@a.com"} values:{string_value:"john_0@b.com"} values:{string_value:"john_0@c.com"} values:{string_value:"john_0@d.com"} values:{string_value:"john_0@e.com"}}}} fields:{key:"zName" value:{string_value:"john_0"}}}}} Build Row: [C:0 D:0 R:0 V: C:1 D:0 R:0 V: C:2 D:1 R:0 V:company_0 C:3 D:1 R:0 V:2 C:4 D:1 R:0 V:1 C:5 D:2 R:0 V:john_0@a.com C:5 D:2 R:1 V:john_0@b.com C:5 D:2 R:1 V:john_0@c.com C:5 D:2 R:1 V:john_0@d.com C:5 D:2 R:1 V:john_0@e.com C:6 D:2 R:0 V:john_0] Write RecordValue: fields:{key:"Company" value:{string_value:"company_1"}} fields:{key:"CreatedAt" value:{int64_value:4}} fields:{key:"ID" value:{int64_value:2}} fields:{key:"Person" value:{record_value:{fields:{key:"emails" value:{list_value:{values:{string_value:"john_1@a.com"} values:{string_value:"john_1@b.com"} values:{string_value:"john_1@c.com"} values:{string_value:"john_1@d.com"} values:{string_value:"john_1@e.com"}}}} fields:{key:"zName" value:{string_value:"john_1"}}}}} Build Row: [C:0 D:0 R:0 V: C:1 D:0 R:0 V: C:2 D:1 R:0 V:company_1 C:3 D:1 R:0 V:4 C:4 D:1 R:0 V:2 C:5 D:2 R:0 V:john_1@a.com C:5 D:2 R:1 V:john_1@b.com C:5 
D:2 R:1 V:john_1@c.com C:5 D:2 R:1 V:john_1@d.com C:5 D:2 R:1 V:john_1@e.com C:6 D:2 R:0 V:john_1] Write RecordValue: fields:{key:"Company" value:{string_value:"company_2"}} fields:{key:"CreatedAt" value:{int64_value:6}} fields:{key:"ID" value:{int64_value:3}} fields:{key:"Person" value:{record_value:{fields:{key:"emails" value:{list_value:{values:{string_value:"john_2@a.com"} values:{string_value:"john_2@b.com"} values:{string_value:"john_2@c.com"} values:{string_value:"john_2@d.com"} values:{string_value:"john_2@e.com"}}}} fields:{key:"zName" value:{string_value:"john_2"}}}}} Build Row: [C:0 D:0 R:0 V: C:1 D:0 R:0 V: C:2 D:1 R:0 V:company_2 C:3 D:1 R:0 V:6 C:4 D:1 R:0 V:3 C:5 D:2 R:0 V:john_2@a.com C:5 D:2 R:1 V:john_2@b.com C:5 D:2 R:1 V:john_2@c.com C:5 D:2 R:1 V:john_2@d.com C:5 D:2 R:1 V:john_2@e.com C:6 D:2 R:0 V:john_2] Read RecordValue: fields:{key:"Address" value:{record_value:{fields:{key:"City" value:{string_value:""}} fields:{key:"Street" value:{string_value:""}}}}} fields:{key:"Company" value:{string_value:"company_0"}} fields:{key:"CreatedAt" value:{int64_value:2}} fields:{key:"ID" value:{int64_value:1}} fields:{key:"Person" value:{record_value:{fields:{key:"emails" value:{list_value:{values:{string_value:"john_0@a.com"} values:{string_value:"john_0@b.com"} values:{string_value:"john_0@c.com"} values:{string_value:"john_0@d.com"} values:{string_value:"john_0@e.com"}}}} fields:{key:"zName" value:{string_value:"john_0"}}}}} Read RecordValue: fields:{key:"Address" value:{record_value:{fields:{key:"City" value:{string_value:""}} fields:{key:"Street" value:{string_value:""}}}}} fields:{key:"Company" value:{string_value:"company_1"}} fields:{key:"CreatedAt" value:{int64_value:4}} fields:{key:"ID" value:{int64_value:2}} fields:{key:"Person" value:{record_value:{fields:{key:"emails" value:{list_value:{values:{string_value:"john_1@a.com"} values:{string_value:"john_1@b.com"} values:{string_value:"john_1@c.com"} values:{string_value:"john_1@d.com"} 
values:{string_value:"john_1@e.com"}}}} fields:{key:"zName" value:{string_value:"john_1"}}}}} Read RecordValue: fields:{key:"Address" value:{record_value:{fields:{key:"City" value:{string_value:""}} fields:{key:"Street" value:{string_value:""}}}}} fields:{key:"Company" value:{string_value:"company_2"}} fields:{key:"CreatedAt" value:{int64_value:6}} fields:{key:"ID" value:{int64_value:3}} fields:{key:"Person" value:{record_value:{fields:{key:"emails" value:{list_value:{values:{string_value:"john_2@a.com"} values:{string_value:"john_2@b.com"} values:{string_value:"john_2@c.com"} values:{string_value:"john_2@d.com"} values:{string_value:"john_2@e.com"}}}} fields:{key:"zName" value:{string_value:"john_2"}}}}} total: 3 --- PASS: TestWriteReadParquet (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/mq/schema 0.008s === RUN TestMessageSerde serialized size 368 --- PASS: TestMessageSerde (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/mq/segment 0.002s === RUN TestRingBuffer --- PASS: TestRingBuffer (0.00s) === RUN TestInflightMessageTracker --- PASS: TestInflightMessageTracker (0.00s) === RUN TestInflightMessageTracker2 --- PASS: TestInflightMessageTracker2 (0.00s) === RUN TestInflightMessageTracker3 --- PASS: TestInflightMessageTracker3 (0.00s) === RUN TestInflightMessageTracker4 --- PASS: TestInflightMessageTracker4 (0.00s) === RUN TestAddConsumerInstance &{isAssign:true partition:{RangeStart:0 RangeStop:1 RingSize:3 UnixTimeNs:0} consumer:first ts:{wall:13981367895428301153 ext:506995988 loc:0x1ab6580}} &{isAssign:true partition:{RangeStart:1 RangeStop:2 RingSize:3 UnixTimeNs:0} consumer:first ts:{wall:13981367895428304660 ext:506999495 loc:0x1ab6580}} --- PASS: TestAddConsumerInstance (1.00s) === RUN TestMultipleConsumerInstances &{isAssign:true partition:{RangeStart:2 RangeStop:3 RingSize:3 UnixTimeNs:0} consumer:third ts:{wall:13981367896507281301 ext:1512234302 loc:0x1ab6580}} &{isAssign:true partition:{RangeStart:0 RangeStop:1 RingSize:3 UnixTimeNs:0} 
consumer:first ts:{wall:13981367896507284336 ext:1512237337 loc:0x1ab6580}} &{isAssign:true partition:{RangeStart:1 RangeStop:2 RingSize:3 UnixTimeNs:0} consumer:second ts:{wall:13981367896507285268 ext:1512238279 loc:0x1ab6580}} --- PASS: TestMultipleConsumerInstances (1.00s) === RUN TestConfirmAdjustment &{isAssign:true partition:{RangeStart:0 RangeStop:1 RingSize:3 UnixTimeNs:0} consumer:second ts:{wall:13981367897583584687 ext:2514795874 loc:0x1ab6580}} &{isAssign:true partition:{RangeStart:1 RangeStop:2 RingSize:3 UnixTimeNs:0} consumer:third ts:{wall:13981367897583606127 ext:2514817314 loc:0x1ab6580}} &{isAssign:true partition:{RangeStart:2 RangeStop:3 RingSize:3 UnixTimeNs:0} consumer:first ts:{wall:13981367897583608231 ext:2514819418 loc:0x1ab6580}} &{isAssign:true partition:{RangeStart:1 RangeStop:2 RingSize:3 UnixTimeNs:0} consumer:first ts:{wall:13981367899731052659 ext:4514780188 loc:0x1ab6580}} --- PASS: TestConfirmAdjustment (5.00s) === RUN Test_doBalanceSticky === RUN Test_doBalanceSticky/1_consumer_instance,_1_partition === RUN Test_doBalanceSticky/2_consumer_instances,_1_partition === RUN Test_doBalanceSticky/1_consumer_instance,_2_partitions === RUN Test_doBalanceSticky/2_consumer_instances,_2_partitions === RUN Test_doBalanceSticky/2_consumer_instances,_2_partitions,_1_deleted_consumer_instance === RUN Test_doBalanceSticky/2_consumer_instances,_2_partitions,_1_new_consumer_instance === RUN Test_doBalanceSticky/2_consumer_instances,_2_partitions,_1_new_partition === RUN Test_doBalanceSticky/2_consumer_instances,_2_partitions,_1_new_partition,_1_new_consumer_instance --- PASS: Test_doBalanceSticky (0.00s) --- PASS: Test_doBalanceSticky/1_consumer_instance,_1_partition (0.00s) --- PASS: Test_doBalanceSticky/2_consumer_instances,_1_partition (0.00s) --- PASS: Test_doBalanceSticky/1_consumer_instance,_2_partitions (0.00s) --- PASS: Test_doBalanceSticky/2_consumer_instances,_2_partitions (0.00s) --- PASS: 
Test_doBalanceSticky/2_consumer_instances,_2_partitions,_1_deleted_consumer_instance (0.00s) --- PASS: Test_doBalanceSticky/2_consumer_instances,_2_partitions,_1_new_consumer_instance (0.00s) --- PASS: Test_doBalanceSticky/2_consumer_instances,_2_partitions,_1_new_partition (0.00s) --- PASS: Test_doBalanceSticky/2_consumer_instances,_2_partitions,_1_new_partition,_1_new_consumer_instance (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/mq/sub_coordinator 7.018s ? github.com/seaweedfs/seaweedfs/weed/mq/topic [no test files] ? github.com/seaweedfs/seaweedfs/weed/notification [no test files] ? github.com/seaweedfs/seaweedfs/weed/notification/aws_sqs [no test files] ? github.com/seaweedfs/seaweedfs/weed/notification/gocdk_pub_sub [no test files] ? github.com/seaweedfs/seaweedfs/weed/notification/google_pub_sub [no test files] ? github.com/seaweedfs/seaweedfs/weed/notification/kafka [no test files] ? github.com/seaweedfs/seaweedfs/weed/notification/log [no test files] === RUN TestCaching vid 123 locations = [{a.com:8080 0}] --- PASS: TestCaching (2.01s) === RUN TestCreateNeedleFromRequest needle: 0f084d17353afda0 Size:0, DataSize:0, Name:t.txt, Mime:text/plain; charset=utf-8 Compressed:true, originalSize: 1422 W0603 08:55:54.904547 upload_content.go:190 uploading 0 to http://localhost:8080/389,0f084d17353afda0: upload t.txt 803 bytes to http://localhost:8080/389,0f084d17353afda0: EOF needle: 0f084d17353afda0 Size:0, DataSize:0, Name:t.txt, Mime:text/plain; charset=utf-8 Compressed:true, originalSize: 1422 W0603 08:55:55.379056 upload_content.go:190 uploading 1 to http://localhost:8080/389,0f084d17353afda0: upload t.txt 803 bytes to http://localhost:8080/389,0f084d17353afda0: EOF needle: 0f084d17353afda0 Size:0, DataSize:0, Name:t.txt, Mime:text/plain; charset=utf-8 Compressed:true, originalSize: 1422 W0603 08:55:56.092284 upload_content.go:190 uploading 2 to http://localhost:8080/389,0f084d17353afda0: upload t.txt 803 bytes to 
http://localhost:8080/389,0f084d17353afda0: EOF err: upload t.txt 803 bytes to http://localhost:8080/389,0f084d17353afda0: EOF uploadResult: needle: 0f084d17353afda0 Size:0, DataSize:0, Name:t.txt, Mime:text/plain Compressed:true, dataSize:803 originalSize:1422 W0603 08:55:56.092368 upload_content.go:190 uploading 0 to http://localhost:8080/389,0f084d17353afda0: upload t.txt 803 bytes to http://localhost:8080/389,0f084d17353afda0: EOF needle: 0f084d17353afda0 Size:0, DataSize:0, Name:t.txt, Mime:text/plain Compressed:true, dataSize:803 originalSize:1422 W0603 08:55:56.568957 upload_content.go:190 uploading 1 to http://localhost:8080/389,0f084d17353afda0: upload t.txt 803 bytes to http://localhost:8080/389,0f084d17353afda0: EOF needle: 0f084d17353afda0 Size:0, DataSize:0, Name:t.txt, Mime:text/plain Compressed:true, dataSize:803 originalSize:1422 W0603 08:55:57.282278 upload_content.go:190 uploading 2 to http://localhost:8080/389,0f084d17353afda0: upload t.txt 803 bytes to http://localhost:8080/389,0f084d17353afda0: EOF --- PASS: TestCreateNeedleFromRequest (2.38s) PASS ok github.com/seaweedfs/seaweedfs/weed/operation 4.400s === RUN TestJsonpMarshalUnmarshal marshalled: { "backendType": "aws", "backendId": "", "key": "", "offset": "0", "fileSize": "12", "modifiedTime": "0", "extension": "" } unmarshalled: backend_type:"aws" backend_id:"temp" file_size:12 --- PASS: TestJsonpMarshalUnmarshal (0.00s) === RUN TestServerAddresses_ToAddressMapOrSrv_shouldRemovePrefix --- PASS: TestServerAddresses_ToAddressMapOrSrv_shouldRemovePrefix (0.00s) === RUN TestServerAddresses_ToAddressMapOrSrv_shouldHandleIPPortList --- PASS: TestServerAddresses_ToAddressMapOrSrv_shouldHandleIPPortList (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/pb 0.004s === RUN TestFileIdSize 24 14 --- PASS: TestFileIdSize (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/pb/filer_pb 0.008s ? github.com/seaweedfs/seaweedfs/weed/pb/iam_pb [no test files] ? 
github.com/seaweedfs/seaweedfs/weed/pb/master_pb [no test files] ? github.com/seaweedfs/seaweedfs/weed/pb/message_fbs [no test files] ? github.com/seaweedfs/seaweedfs/weed/pb/mount_pb [no test files] ? github.com/seaweedfs/seaweedfs/weed/pb/mq_agent_pb [no test files] ? github.com/seaweedfs/seaweedfs/weed/pb/mq_pb [no test files] ? github.com/seaweedfs/seaweedfs/weed/pb/remote_pb [no test files] ? github.com/seaweedfs/seaweedfs/weed/pb/s3_pb [no test files] ? github.com/seaweedfs/seaweedfs/weed/pb/schema_pb [no test files] ? github.com/seaweedfs/seaweedfs/weed/pb/volume_server_pb [no test files] === RUN TestGjson { "quiz": { "sport": { "q1": { "question": "Which one is correct team name in NBA?", "options": [ "New York Bulls", "Los Angeles Kings", "Golden State Warriros", "Huston Rocket" ], "answer": "Huston Rocket" } }, "maths": { "q1": { "question": "5 + 7 = ?", "options": [ "10", "11", "12", "13" ], "answer": "12" }, "q2": { "question": "12 - 8 = ?", "options": [ "1", "2", "3", "4" ], "answer": "4" } } } } +++++++++++ 12 5 { "sport": { "q1": { "question": "Which one is correct team name in NBA?", "options": [ "New York Bulls", "Los Angeles Kings", "Golden State Warriros", "Huston Rocket" ], "answer": "Huston Rocket" } }, "maths": { "q1": { "question": "5 + 7 = ?", "options": [ "10", "11", "12", "13" ], "answer": "12" }, "q2": { "question": "12 - 8 = ?", "options": [ "1", "2", "3", "4" ], "answer": "4" } } } 0 0 ----------- { "fruit": "Apple", "size": "Large", "quiz": "Red" } +++++++++++ 51 3 Red 13 3 Apple ----------- --- PASS: TestGjson (0.00s) === RUN TestJsonQueryRow {fruit:"Bl\"ue",size:6} --- PASS: TestJsonQueryRow (0.00s) === RUN TestJsonQueryNumber {fruit:"Bl\"ue",quiz:"green"} --- PASS: TestJsonQueryNumber (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/query/json 0.002s ? github.com/seaweedfs/seaweedfs/weed/query/sqltypes [no test files] ? github.com/seaweedfs/seaweedfs/weed/remote_storage [no test files] ? 
github.com/seaweedfs/seaweedfs/weed/remote_storage/azure [no test files] ? github.com/seaweedfs/seaweedfs/weed/remote_storage/gcs [no test files] ? github.com/seaweedfs/seaweedfs/weed/remote_storage/s3 [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/repl_util [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/sink [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/sink/azuresink [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/sink/b2sink [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/sink/filersink [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/sink/gcssink [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/sink/localsink [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/sink/s3sink [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/source [no test files] ? github.com/seaweedfs/seaweedfs/weed/replication/sub [no test files] === RUN TestIdentityListFileFormat { "identities": [ { "name": "some_name", "credentials": [ { "accessKey": "some_access_key1", "secretKey": "some_secret_key2" } ], "actions": [ "Admin", "Read", "Write" ], "account": null }, { "name": "some_read_only_user", "credentials": [ { "accessKey": "some_access_key1", "secretKey": "some_secret_key1" } ], "actions": [ "Read" ], "account": null }, { "name": "some_normal_user", "credentials": [ { "accessKey": "some_access_key2", "secretKey": "some_secret_key2" } ], "actions": [ "Read", "Write" ], "account": null } ], "accounts": [] } --- PASS: TestIdentityListFileFormat (0.00s) === RUN TestCanDo --- PASS: TestCanDo (0.00s) === RUN TestLoadS3ApiConfiguration --- PASS: TestLoadS3ApiConfiguration (0.00s) === RUN TestIsRequestPresignedSignatureV4 --- PASS: TestIsRequestPresignedSignatureV4 (0.00s) === RUN TestIsReqAuthenticated --- PASS: TestIsReqAuthenticated (0.00s) === RUN 
TestCheckaAnonymousRequestAuthType --- PASS: TestCheckaAnonymousRequestAuthType (0.00s) === RUN TestCheckAdminRequestAuthType --- PASS: TestCheckAdminRequestAuthType (0.00s) === RUN TestGetStringToSignPUT --- PASS: TestGetStringToSignPUT (0.00s) === RUN TestGetStringToSignGETEmptyStringHash --- PASS: TestGetStringToSignGETEmptyStringHash (0.00s) === RUN TestBuildBucketMetadata W0603 08:55:54.930823 bucket_metadata.go:106 Invalid ownership: , bucket: ownershipEmptyStr W0603 08:55:54.931070 bucket_metadata.go:117 owner[id=xxxxx] is invalid, bucket: acpEmptyObject --- PASS: TestBuildBucketMetadata (0.00s) === RUN TestGetBucketMetadata --- PASS: TestGetBucketMetadata (1.00s) === RUN TestNewSignV4ChunkedReaderstreamingAws4HmacSha256Payload --- PASS: TestNewSignV4ChunkedReaderstreamingAws4HmacSha256Payload (0.00s) === RUN TestNewSignV4ChunkedReaderStreamingUnsignedPayloadTrailer --- PASS: TestNewSignV4ChunkedReaderStreamingUnsignedPayloadTrailer (0.00s) === RUN TestInitiateMultipartUploadResult --- PASS: TestInitiateMultipartUploadResult (0.00s) === RUN TestListPartsResult --- PASS: TestListPartsResult (0.00s) === RUN Test_parsePartNumber === RUN Test_parsePartNumber/first === RUN Test_parsePartNumber/second --- PASS: Test_parsePartNumber (0.00s) --- PASS: Test_parsePartNumber/first (0.00s) --- PASS: Test_parsePartNumber/second (0.00s) === RUN TestGetAccountId --- PASS: TestGetAccountId (0.00s) === RUN TestExtractAcl --- PASS: TestExtractAcl (0.00s) === RUN TestParseAndValidateAclHeaders W0603 08:55:55.932884 s3api_acl_helper.go:292 invalid canonical grantee! account id[notExistsAccount] is not exists W0603 08:55:55.932889 s3api_acl_helper.go:281 invalid group grantee! 
group name[http:sfasf] is not valid --- PASS: TestParseAndValidateAclHeaders (0.00s) === RUN TestDetermineReqGrants --- PASS: TestDetermineReqGrants (0.00s) === RUN TestAssembleEntryWithAcp --- PASS: TestAssembleEntryWithAcp (0.00s) === RUN TestGrantEquals --- PASS: TestGrantEquals (0.00s) === RUN TestSetAcpOwnerHeader --- PASS: TestSetAcpOwnerHeader (0.00s) === RUN TestSetAcpGrantsHeader --- PASS: TestSetAcpGrantsHeader (0.00s) === RUN TestListBucketsHandler --- PASS: TestListBucketsHandler (0.00s) === RUN TestLimit --- PASS: TestLimit (0.00s) === RUN TestProcessMetadata --- PASS: TestProcessMetadata (0.00s) === RUN TestProcessMetadataBytes --- PASS: TestProcessMetadataBytes (0.00s) === RUN TestListObjectsHandler --- PASS: TestListObjectsHandler (0.00s) === RUN Test_normalizePrefixMarker === RUN Test_normalizePrefixMarker/prefix_is_a_directory === RUN Test_normalizePrefixMarker/normal_case === RUN Test_normalizePrefixMarker/empty_prefix === RUN Test_normalizePrefixMarker/empty_directory --- PASS: Test_normalizePrefixMarker (0.00s) --- PASS: Test_normalizePrefixMarker/prefix_is_a_directory (0.00s) --- PASS: Test_normalizePrefixMarker/normal_case (0.00s) --- PASS: Test_normalizePrefixMarker/empty_prefix (0.00s) --- PASS: Test_normalizePrefixMarker/empty_directory (0.00s) === RUN TestRemoveDuplicateSlashes === RUN TestRemoveDuplicateSlashes/empty === RUN TestRemoveDuplicateSlashes/slash === RUN TestRemoveDuplicateSlashes/object === RUN TestRemoveDuplicateSlashes/correct_path === RUN TestRemoveDuplicateSlashes/path_with_duplicates --- PASS: TestRemoveDuplicateSlashes (0.00s) --- PASS: TestRemoveDuplicateSlashes/empty (0.00s) --- PASS: TestRemoveDuplicateSlashes/slash (0.00s) --- PASS: TestRemoveDuplicateSlashes/object (0.00s) --- PASS: TestRemoveDuplicateSlashes/correct_path (0.00s) --- PASS: TestRemoveDuplicateSlashes/path_with_duplicates (0.00s) === RUN TestS3ApiServer_toFilerUrl === RUN TestS3ApiServer_toFilerUrl/simple === RUN 
TestS3ApiServer_toFilerUrl/double_prefix === RUN TestS3ApiServer_toFilerUrl/triple_prefix === RUN TestS3ApiServer_toFilerUrl/empty_prefix --- PASS: TestS3ApiServer_toFilerUrl (0.00s) --- PASS: TestS3ApiServer_toFilerUrl/simple (0.00s) --- PASS: TestS3ApiServer_toFilerUrl/double_prefix (0.00s) --- PASS: TestS3ApiServer_toFilerUrl/triple_prefix (0.00s) --- PASS: TestS3ApiServer_toFilerUrl/empty_prefix (0.00s) === RUN TestCopyObjectResponse 2025-06-03T08:55:55.935567588Z12345678 --- PASS: TestCopyObjectResponse (0.00s) === RUN TestCopyPartResponse 2025-06-03T08:55:55.935597825Z12345678 --- PASS: TestCopyPartResponse (0.00s) === RUN TestXMLUnmarshall --- PASS: TestXMLUnmarshall (0.00s) === RUN TestXMLMarshall --- PASS: TestXMLMarshall (0.00s) === RUN TestValidateTags --- PASS: TestValidateTags (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/s3api 1.027s === RUN TestPostPolicyForm --- PASS: TestPostPolicyForm (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/s3api/policy 0.006s ? github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants [no test files] === RUN Test_verifyBucketName --- PASS: Test_verifyBucketName (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/s3api/s3bucket 0.003s ? github.com/seaweedfs/seaweedfs/weed/s3api/s3err [no test files] ? 
github.com/seaweedfs/seaweedfs/weed/security [no test files] === RUN TestSequencer I0603 08:55:54.916062 snowflake_sequencer.go:21 use snowflake seq id generator, nodeid:for_test hex_of_nodeid: 1 1ac81d0658c01000 1ac81d0658c01001 1ac81d0658c01002 1ac81d0658c01003 1ac81d0658c01004 1ac81d0658c01005 1ac81d0658c01006 1ac81d0658c01007 1ac81d0658c01008 1ac81d0658c01009 1ac81d0658c0100a 1ac81d0658c0100b 1ac81d0658c0100c 1ac81d0658c0100d 1ac81d0658c0100e 1ac81d0658c0100f 1ac81d0658c01010 1ac81d0658c01011 1ac81d0658c01012 1ac81d0658c01013 1ac81d0658c01014 1ac81d0658c01015 1ac81d0658c01016 1ac81d0658c01017 1ac81d0658c01018 1ac81d0658c01019 1ac81d0658c0101a 1ac81d0658c0101b 1ac81d0658c0101c 1ac81d0658c0101d 1ac81d0658c0101e 1ac81d0658c0101f 1ac81d0658c01020 1ac81d0658c01021 1ac81d0658c01022 1ac81d0658c01023 1ac81d0658c01024 1ac81d0658c01025 1ac81d0658c01026 1ac81d0658c01027 1ac81d0658c01028 1ac81d0658c01029 1ac81d0658c0102a 1ac81d0658c0102b 1ac81d0658c0102c 1ac81d0658c0102d 1ac81d0658c0102e 1ac81d0658c0102f 1ac81d0658c01030 1ac81d0658c01031 1ac81d0658c01032 1ac81d0658c01033 1ac81d0658c01034 1ac81d0658c01035 1ac81d0658c01036 1ac81d0658c01037 1ac81d0658c01038 1ac81d0658c01039 1ac81d0658c0103a 1ac81d0658c0103b 1ac81d0658c0103c 1ac81d0658c0103d 1ac81d0658c0103e 1ac81d0658c0103f 1ac81d0658c01040 1ac81d0658c01041 1ac81d0658c01042 1ac81d0658c01043 1ac81d0658c01044 1ac81d0658c01045 1ac81d0658c01046 1ac81d0658c01047 1ac81d0658c01048 1ac81d0658c01049 1ac81d0658c0104a 1ac81d0658c0104b 1ac81d0658c0104c 1ac81d0658c0104d 1ac81d0658c0104e 1ac81d0658c0104f 1ac81d0658c01050 1ac81d0658c01051 1ac81d0658c01052 1ac81d0658c01053 1ac81d0658c01054 1ac81d0658c01055 1ac81d0658c01056 1ac81d0658c01057 1ac81d0658c01058 1ac81d0658c01059 1ac81d0658c0105a 1ac81d0658c0105b 1ac81d0658c0105c 1ac81d0658c0105d 1ac81d0658c0105e 1ac81d0658c0105f 1ac81d0658c01060 1ac81d0658c01061 1ac81d0658c01062 1ac81d0658c01063 --- PASS: TestSequencer (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/sequence 0.007s === RUN 
TestParseURL --- PASS: TestParseURL (0.00s) === RUN TestPtrie matched1 /topics/abc matched1 /topics/abc/d matched2 /topics/abc matched2 /topics/abc/d --- PASS: TestPtrie (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/server 0.021s ? github.com/seaweedfs/seaweedfs/weed/server/constants [no test files] === RUN TestToBreadcrumb === RUN TestToBreadcrumb/empty === RUN TestToBreadcrumb/test1 === RUN TestToBreadcrumb/test2 === RUN TestToBreadcrumb/test3 --- PASS: TestToBreadcrumb (0.00s) --- PASS: TestToBreadcrumb/empty (0.00s) --- PASS: TestToBreadcrumb/test1 (0.00s) --- PASS: TestToBreadcrumb/test2 (0.00s) --- PASS: TestToBreadcrumb/test3 (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/server/filer_ui 0.007s ? github.com/seaweedfs/seaweedfs/weed/server/master_ui [no test files] ? github.com/seaweedfs/seaweedfs/weed/server/volume_server_ui [no test files] ? github.com/seaweedfs/seaweedfs/weed/sftpd [no test files] ? github.com/seaweedfs/seaweedfs/weed/sftpd/auth [no test files] ? github.com/seaweedfs/seaweedfs/weed/sftpd/user [no test files] ? 
github.com/seaweedfs/seaweedfs/weed/sftpd/utils [no test files] === RUN TestCollectCollectionsForVolumeIds --- PASS: TestCollectCollectionsForVolumeIds (0.00s) === RUN TestParseReplicaPlacementArg using master default replica placement "123" for EC volumes using replica placement "021" for EC volumes --- PASS: TestParseReplicaPlacementArg (0.00s) === RUN TestEcDistribution => 192.168.1.5:8080 27010 => 192.168.1.6:8080 17420 => 192.168.1.1:8080 17330 => 192.168.1.4:8080 1900 => 192.168.1.2:8080 1540 --- PASS: TestEcDistribution (0.00s) === RUN TestPickRackToBalanceShardsInto --- PASS: TestPickRackToBalanceShardsInto (0.00s) === RUN TestPickEcNodeToBalanceShardsInto --- PASS: TestPickEcNodeToBalanceShardsInto (0.00s) === RUN TestCountFreeShardSlots === RUN TestCountFreeShardSlots/topology_#1,_free_HDD_shards === RUN TestCountFreeShardSlots/topology_#1,_no_free_SSD_shards_available === RUN TestCountFreeShardSlots/topology_#2,_no_negative_free_HDD_shards === RUN TestCountFreeShardSlots/topology_#2,_no_free_SSD_shards_available --- PASS: TestCountFreeShardSlots (0.00s) --- PASS: TestCountFreeShardSlots/topology_#1,_free_HDD_shards (0.00s) --- PASS: TestCountFreeShardSlots/topology_#1,_no_free_SSD_shards_available (0.00s) --- PASS: TestCountFreeShardSlots/topology_#2,_no_negative_free_HDD_shards (0.00s) --- PASS: TestCountFreeShardSlots/topology_#2,_no_free_SSD_shards_available (0.00s) === RUN TestCommandEcBalanceSmall balanceEcVolumes c1 dn1 moves ec shard 1.5 to dn2 dn1 moves ec shard 1.6 to dn2 dn1 moves ec shard 1.0 to dn2 dn1 moves ec shard 1.1 to dn2 dn1 moves ec shard 1.2 to dn2 dn1 moves ec shard 1.3 to dn2 dn1 moves ec shard 1.4 to dn2 dn2 moves ec shard 2.6 to dn1 dn2 moves ec shard 2.0 to dn1 dn2 moves ec shard 2.1 to dn1 dn2 moves ec shard 2.2 to dn1 dn2 moves ec shard 2.3 to dn1 dn2 moves ec shard 2.4 to dn1 dn2 moves ec shard 2.5 to dn1 --- PASS: TestCommandEcBalanceSmall (0.00s) === RUN TestCommandEcBalanceNothingToMove balanceEcVolumes c1 --- PASS: 
TestCommandEcBalanceNothingToMove (0.00s)
=== RUN TestCommandEcBalanceAddNewServers
balanceEcVolumes c1
--- PASS: TestCommandEcBalanceAddNewServers (0.00s)
=== RUN TestCommandEcBalanceAddNewRacks
balanceEcVolumes c1
dn2 moves ec shard 1.8 to dn4
dn1 moves ec shard 1.1 to dn3
dn1 moves ec shard 1.2 to dn3
dn2 moves ec shard 1.9 to dn4
dn2 moves ec shard 1.10 to dn4
dn1 moves ec shard 1.0 to dn3
dn2 moves ec shard 1.7 to dn4
dn1 moves ec shard 2.8 to dn4
dn1 moves ec shard 2.9 to dn3
dn2 moves ec shard 2.2 to dn3
dn2 moves ec shard 2.3 to dn4
dn1 moves ec shard 2.7 to dn4
dn2 moves ec shard 2.0 to dn3
dn2 moves ec shard 2.1 to dn3
--- PASS: TestCommandEcBalanceAddNewRacks (0.00s)
=== RUN TestCommandEcBalanceVolumeEvenButRackUneven
balanceEcVolumes c1
dn_shared moves ec shards 1.0 to dn3
--- PASS: TestCommandEcBalanceVolumeEvenButRackUneven (0.00s)
=== RUN TestCircuitBreakerShell
--- PASS: TestCircuitBreakerShell (0.00s)
=== RUN TestIsGoodMove
replication: 100 expected false name: test 100 move to wrong data centers
replication: 100 expected true name: test 100 move to spread into proper data centers
replication: 001 expected false name: test move to the same node
replication: 001 expected false name: test move to the same rack, but existing node
replication: 001 expected true name: test move to the same rack, a new node
replication: 010 expected false name: test 010 move all to the same rack
replication: 010 expected true name: test 010 move to spread racks
replication: 010 expected true name: test 010 move to spread racks
replication: 011 expected true name: test 011 switch which rack has more replicas
replication: 011 expected true name: test 011 move the lonely replica to another racks
replication: 011 expected false name: test 011 move to wrong racks
replication: 011 expected false name: test 011 move all to the same rack
--- PASS: TestIsGoodMove (0.00s)
=== RUN TestBalance
hdd 0.10 0.21:0.06 moving volume 31 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.20:0.06 moving volume 29 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.20:0.06 moving volume 30 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.20:0.06 moving volume 27 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.19:0.06 moving volume 28 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.19:0.06 moving volume collection4_7 192.168.1.2:8080 => 192.168.1.6:8080
hdd 0.10 0.19:0.06 moving volume collection0_25 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.18:0.06 moving volume collection3_9 192.168.1.2:8080 => 192.168.1.6:8080
hdd 0.10 0.18:0.06 moving volume collection1_80 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.18:0.06 moving volume collection1_69 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.18:0.06 moving volume 4 192.168.1.2:8080 => 192.168.1.6:8080
hdd 0.10 0.17:0.06 moving volume collection1_84 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.17:0.07 moving volume 2 192.168.1.2:8080 => 192.168.1.6:8080
hdd 0.10 0.17:0.07 moving volume collection1_63 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.17:0.07 moving volume 6 192.168.1.2:8080 => 192.168.1.6:8080
hdd 0.10 0.17:0.07 moving volume collection1_74 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.16:0.07 moving volume 3 192.168.1.2:8080 => 192.168.1.6:8080
hdd 0.10 0.16:0.07 moving volume collection1_85 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.16:0.07 moving volume collection1_54 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.16:0.07 moving volume collection1_81 192.168.1.2:8080 => 192.168.1.6:8080
hdd 0.10 0.15:0.07 moving volume collection1_97 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.15:0.07 moving volume collection1_56 192.168.1.2:8080 => 192.168.1.6:8080
hdd 0.10 0.15:0.07 moving volume collection1_174 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.15:0.07 moving volume collection2_380 192.168.1.2:8080 => 192.168.1.6:8080
hdd 0.10 0.15:0.07 moving volume collection1_105 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.14:0.07 moving volume collection1_215 192.168.1.2:8080 => 192.168.1.6:8080
hdd 0.10 0.14:0.07 moving volume collection0_24 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.14:0.07 moving volume collection1_173 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.14:0.07 moving volume collection1_107 192.168.1.2:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.07 moving volume 5 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_136 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_238 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_240 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection0_26 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_167 192.168.1.2:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_66 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_65 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_57 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_62 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_67 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_138 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_70 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_90 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_72 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_71 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_75 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_58 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_177 192.168.1.2:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.08 moving volume collection1_87 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.13:0.09 moving volume collection1_73 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_77 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_116 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_83 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_91 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_79 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_64 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_61 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_76 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_59 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_139 192.168.1.2:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_96 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_144 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_95 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_92 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_86 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_60 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.09 moving volume collection1_55 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.10 moving volume collection2_379 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.10 moving volume collection1_94 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.10 moving volume collection1_82 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.10 moving volume collection1_128 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.10 moving volume collection1_89 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.10 moving volume collection1_53 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.10 moving volume collection2_357 192.168.1.2:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.10 moving volume collection1_99 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.12:0.10 moving volume collection1_111 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.11:0.10 moving volume collection1_176 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.11:0.10 moving volume collection4_7 192.168.1.1:8080 => 192.168.1.5:8080
hdd 0.10 0.11:0.10 moving volume collection3_9 192.168.1.1:8080 => 192.168.1.5:8080
hdd 0.10 0.11:0.10 moving volume collection1_169 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.11:0.10 moving volume 1 192.168.1.1:8080 => 192.168.1.5:8080
hdd 0.10 0.11:0.10 moving volume collection1_197 192.168.1.4:8080 => 192.168.1.6:8080
hdd 0.10 0.11:0.10 moving volume 4 192.168.1.1:8080 => 192.168.1.5:8080
hdd 0.10 0.11:0.10 moving volume 2 192.168.1.1:8080 => 192.168.1.5:8080
hdd 0.10 0.11:0.10 moving volume collection1_126 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.11:0.10 moving volume collection2_381 192.168.1.2:8080 => 192.168.1.5:8080
hdd 0.10 0.11:0.10 moving volume collection1_165 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.11:0.10 moving volume 6 192.168.1.1:8080 => 192.168.1.5:8080
hdd 0.10 0.11:0.10 moving volume 3 192.168.1.1:8080 => 192.168.1.5:8080
hdd 0.10 0.11:0.10 moving volume collection1_232 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.11:0.10 moving volume collection0_25 192.168.1.1:8080 => 192.168.1.5:8080
hdd 0.10 0.11:0.10 moving volume collection2_345 192.168.1.4:8080 => 192.168.1.5:8080
hdd 0.10 0.11:0.10 moving volume collection1_135 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.11:0.10 moving volume collection1_68 192.168.1.1:8080 => 192.168.1.5:8080
hdd 0.10 0.11:0.10 moving volume collection1_117 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.11:0.10 moving volume collection1_74 192.168.1.1:8080 => 192.168.1.5:8080
hdd 0.10 0.11:0.10 moving volume collection2_378 192.168.1.1:8080 => 192.168.1.5:8080
hdd 0.10 0.11:0.10 moving volume collection1_194 192.168.1.1:8080 => 192.168.1.6:8080
hdd 0.10 0.11:0.10 moving volume collection1_179 192.168.1.2:8080 => 192.168.1.5:8080
--- PASS: TestBalance (0.00s)
=== RUN TestVolumeSelection
collect volumes quiet for: 0 seconds
--- PASS: TestVolumeSelection (0.00s)
=== RUN TestDeleteEmptySelection
--- PASS: TestDeleteEmptySelection (0.00s)
=== RUN TestShouldSkipVolume
--- PASS: TestShouldSkipVolume (0.00s)
=== RUN TestSatisfyReplicaPlacementComplicated
replication: 100 expected false name: test 100 negative
replication: 100 expected true name: test 100 positive
replication: 022 expected true name: test 022 positive
replication: 022 expected false name: test 022 negative
replication: 210 expected true name: test 210 moved from 200 positive
replication: 210 expected false name: test 210 moved from 200 negative extra dc
replication: 210 expected false name: test 210 moved from 200 negative extra data node
--- PASS: TestSatisfyReplicaPlacementComplicated (0.00s)
=== RUN TestSatisfyReplicaPlacement01x
replication: 011 expected true name: test 011 same existing rack
replication: 011 expected false name: test 011 negative
replication: 011 expected true name: test 011 different existing racks
replication: 011 expected false name: test 011 different existing racks negative
--- PASS: TestSatisfyReplicaPlacement01x (0.00s)
=== RUN TestSatisfyReplicaPlacement00x
replication: 001 expected true name: test 001
replication: 002 expected true name: test 002 positive
replication: 002 expected false name: test 002 negative, repeat the same node
replication: 002 expected false name: test 002 negative, enough node already
--- PASS: TestSatisfyReplicaPlacement00x (0.00s)
=== RUN TestSatisfyReplicaPlacement100
replication: 100 expected true name: test 100
--- PASS: TestSatisfyReplicaPlacement100 (0.00s)
=== RUN TestMisplacedChecking
replication: 001 expected true name: test 001
replication: 010 expected false name: test 010
replication: 011 expected false name: test 011
replication: 110 expected true name: test 110
replication: 100 expected true name: test 100
--- PASS: TestMisplacedChecking (0.00s)
=== RUN TestPickingMisplacedVolumeToDelete
replication: 001 name: test 001
command_volume_fix_replication_test.go:435: test 001: picked dn2 001
replication: 100 name: test 100
command_volume_fix_replication_test.go:435: test 100: picked dn2 100
--- PASS: TestPickingMisplacedVolumeToDelete (0.00s)
=== RUN TestSatisfyReplicaCurrentLocation
=== RUN TestSatisfyReplicaCurrentLocation/test_001
=== RUN TestSatisfyReplicaCurrentLocation/test_010
=== RUN TestSatisfyReplicaCurrentLocation/test_011
=== RUN TestSatisfyReplicaCurrentLocation/test_110
=== RUN TestSatisfyReplicaCurrentLocation/test_100
--- PASS: TestSatisfyReplicaCurrentLocation (0.00s)
--- PASS: TestSatisfyReplicaCurrentLocation/test_001 (0.00s)
--- PASS: TestSatisfyReplicaCurrentLocation/test_010 (0.00s)
--- PASS: TestSatisfyReplicaCurrentLocation/test_011 (0.00s)
--- PASS: TestSatisfyReplicaCurrentLocation/test_110 (0.00s)
--- PASS: TestSatisfyReplicaCurrentLocation/test_100 (0.00s)
=== RUN TestParsing
--- PASS: TestParsing (0.06s)
=== RUN TestVolumeServerEvacuate
moving volume collection0_15 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection0_21 192.168.1.4:8080 => 192.168.1.6:8080
moving volume collection0_22 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection0_23 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection0_24 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection0_25 192.168.1.4:8080 => 192.168.1.2:8080
moving volume 27 192.168.1.4:8080 => 192.168.1.2:8080
moving volume 28 192.168.1.4:8080 => 192.168.1.2:8080
moving volume 29 192.168.1.4:8080 => 192.168.1.2:8080
moving volume 30 192.168.1.4:8080 => 192.168.1.2:8080
moving volume 31 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_33 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_38 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_51 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_52 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_54 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_63 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_69 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_74 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_80 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_84 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_85 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_97 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_98 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_105 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_106 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_112 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_116 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_119 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_128 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_133 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_136 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_138 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_140 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_144 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_161 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_173 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_174 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_197 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection1_219 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection2_263 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection2_272 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection2_291 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection2_299 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection2_301 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection2_302 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection2_339 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection2_345 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection2_355 192.168.1.4:8080 => 192.168.1.2:8080
moving volume collection2_373 192.168.1.4:8080 => 192.168.1.2:8080
--- PASS: TestVolumeServerEvacuate (0.00s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/shell 0.173s
=== RUN TestRobinCounter
--- PASS: TestRobinCounter (0.00s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/stats 0.006s
=== RUN TestUnUsedSpace
--- PASS: TestUnUsedSpace (0.00s)
=== RUN TestFirstInvalidIndex
I0603 08:55:54.921868 volume_loading.go:139 checking volume data integrity for volume 1
I0603 08:55:54.922032 volume_loading.go:157 loading memory index /tmp/TestFirstInvalidIndex589262937/001/1.idx to memory
--- PASS: TestFirstInvalidIndex (0.00s)
=== RUN TestFastLoadingNeedleMapMetrics
I0603 08:55:54.933660 needle_map_metric_test.go:26 FileCount expected 10000 actual 12026
I0603 08:55:54.933689 needle_map_metric_test.go:27 DeletedSize expected 1666 actual 1666
I0603 08:55:54.933693 needle_map_metric_test.go:28 ContentSize expected 10000 actual 10000
I0603 08:55:54.933696 needle_map_metric_test.go:29 DeletedCount expected 1666 actual 3692
I0603 08:55:54.933698 needle_map_metric_test.go:30 MaxFileKey expected 10000 actual 10000
--- PASS: TestFastLoadingNeedleMapMetrics (0.01s)
=== RUN TestBinarySearch
--- PASS: TestBinarySearch (0.00s)
=== RUN TestSortVolumeInfos
--- PASS: TestSortVolumeInfos (0.00s)
=== RUN TestReadNeedMetaWithWritesAndUpdates
I0603 08:55:54.933863 volume_loading.go:139 checking volume data integrity for volume 1
I0603 08:55:54.933871 volume_loading.go:157 loading memory index /tmp/TestReadNeedMetaWithWritesAndUpdates1182094447/001/1.idx to memory
--- PASS: TestReadNeedMetaWithWritesAndUpdates (0.00s)
=== RUN TestReadNeedMetaWithDeletesThenWrites
I0603 08:55:54.934222 volume_loading.go:139 checking volume data integrity for volume 1
I0603 08:55:54.934232 volume_loading.go:157 loading memory index /tmp/TestReadNeedMetaWithDeletesThenWrites2259484729/001/1.idx to memory
--- PASS: TestReadNeedMetaWithDeletesThenWrites (0.00s)
=== RUN TestMakeDiff
--- PASS: TestMakeDiff (0.00s)
=== RUN TestMemIndexCompaction
I0603 08:55:54.934479 volume_loading.go:139 checking volume data integrity for volume 1
I0603 08:55:54.934485 volume_loading.go:157 loading memory index /tmp/TestMemIndexCompaction3602002386/001/1.idx to memory
I0603 08:55:55.026968 needle_map_memory.go:111 loading idx from offset 0 for file: /tmp/TestMemIndexCompaction3602002386/001/1.cpx
volume_vacuum_test.go:92: compaction speed: 91546492.89 bytes/s
I0603 08:55:55.108283 volume_vacuum.go:114 Committing volume 1 vacuuming...
I0603 08:55:55.189586 needle_map_memory.go:111 loading idx from offset 9701 for file: /tmp/TestMemIndexCompaction3602002386/001/1.cpx
I0603 08:55:55.194834 volume_loading.go:98 readSuperBlock volume 1 version 3
I0603 08:55:55.194850 volume_loading.go:139 checking volume data integrity for volume 1
I0603 08:55:55.194857 volume_loading.go:154 updating memory compact index /tmp/TestMemIndexCompaction3602002386/001/1.idx
volume_vacuum_test.go:110: realRecordCount:29701, v.FileCount():29701 mm.DeletedCount():9829
I0603 08:55:55.194899 volume_loading.go:98 readSuperBlock volume 1 version 3
I0603 08:55:55.194912 volume_loading.go:139 checking volume data integrity for volume 1
I0603 08:55:55.194917 volume_loading.go:157 loading memory index /tmp/TestMemIndexCompaction3602002386/001/1.idx to memory
--- PASS: TestMemIndexCompaction (0.29s)
=== RUN TestLDBIndexCompaction
I0603 08:55:55.224542 volume_loading.go:139 checking volume data integrity for volume 1
I0603 08:55:55.224554 volume_loading.go:172 loading leveldb index /tmp/TestLDBIndexCompaction1151605393/001/1.ldb
I0603 08:55:55.225037 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestLDBIndexCompaction1151605393/001/1.ldb, watermark 0, num of entries:0
I0603 08:55:55.225900 needle_map_leveldb.go:66 Loading /tmp/TestLDBIndexCompaction1151605393/001/1.ldb... , watermark: 0
I0603 08:55:55.340557 needle_map_leveldb.go:338 loading idx to leveldb from offset 0 for file: /tmp/TestLDBIndexCompaction1151605393/001/1.cpx
volume_vacuum_test.go:92: compaction speed: 87165005.03 bytes/s
I0603 08:55:55.557204 volume_vacuum.go:114 Committing volume 1 vacuuming...
I0603 08:55:55.627405 needle_map_leveldb.go:338 loading idx to leveldb from offset 9727 for file: /tmp/TestLDBIndexCompaction1151605393/001/1.cpx
I0603 08:55:55.684136 volume_loading.go:98 readSuperBlock volume 1 version 3
I0603 08:55:55.684165 volume_loading.go:139 checking volume data integrity for volume 1
I0603 08:55:55.684174 volume_loading.go:169 updating leveldb index /tmp/TestLDBIndexCompaction1151605393/001/1.ldb
volume_vacuum_test.go:105: watermark from levelDB: 20000, realWatermark: 20000, nm.recordCount: 29727, realRecordCount:29727, fileCount=29727, deletedcount:9719
I0603 08:55:55.695996 volume_loading.go:98 readSuperBlock volume 1 version 3
I0603 08:55:55.696007 volume_loading.go:139 checking volume data integrity for volume 1
I0603 08:55:55.696018 volume_loading.go:172 loading leveldb index /tmp/TestLDBIndexCompaction1151605393/001/1.ldb
I0603 08:55:55.696711 needle_map_leveldb.go:66 Loading /tmp/TestLDBIndexCompaction1151605393/001/1.ldb... , watermark: 20000
--- PASS: TestLDBIndexCompaction (0.52s)
=== RUN TestSearchVolumesWithDeletedNeedles
I0603 08:55:55.742859 volume_loading.go:139 checking volume data integrity for volume 1
I0603 08:55:55.742874 volume_loading.go:157 loading memory index /tmp/TestSearchVolumesWithDeletedNeedles3618782781/001/1.idx to memory
offset: 12872, isLast: false
--- PASS: TestSearchVolumesWithDeletedNeedles (0.00s)
=== RUN TestDestroyEmptyVolumeWithOnlyEmpty
I0603 08:55:55.743066 volume_loading.go:139 checking volume data integrity for volume 1
I0603 08:55:55.743074 volume_loading.go:157 loading memory index /tmp/TestDestroyEmptyVolumeWithOnlyEmpty1887119310/001/1.idx to memory
--- PASS: TestDestroyEmptyVolumeWithOnlyEmpty (0.00s)
=== RUN TestDestroyEmptyVolumeWithoutOnlyEmpty
I0603 08:55:55.743304 volume_loading.go:139 checking volume data integrity for volume 1
I0603 08:55:55.743310 volume_loading.go:157 loading memory index /tmp/TestDestroyEmptyVolumeWithoutOnlyEmpty3268708928/001/1.idx to memory
--- PASS: TestDestroyEmptyVolumeWithoutOnlyEmpty (0.00s)
=== RUN TestDestroyNonemptyVolumeWithOnlyEmpty
I0603 08:55:55.743469 volume_loading.go:139 checking volume data integrity for volume 1
I0603 08:55:55.743472 volume_loading.go:157 loading memory index /tmp/TestDestroyNonemptyVolumeWithOnlyEmpty2393585397/001/1.idx to memory
--- PASS: TestDestroyNonemptyVolumeWithOnlyEmpty (0.00s)
=== RUN TestDestroyNonemptyVolumeWithoutOnlyEmpty
I0603 08:55:55.743565 volume_loading.go:139 checking volume data integrity for volume 1
I0603 08:55:55.743569 volume_loading.go:157 loading memory index /tmp/TestDestroyNonemptyVolumeWithoutOnlyEmpty1867719174/001/1.idx to memory
--- PASS: TestDestroyNonemptyVolumeWithoutOnlyEmpty (0.00s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/storage 0.838s
? github.com/seaweedfs/seaweedfs/weed/storage/backend [no test files]
=== RUN TestMemoryMapMaxSizeReadWrite
--- PASS: TestMemoryMapMaxSizeReadWrite (0.00s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/storage/backend/memory_map 0.001s
? github.com/seaweedfs/seaweedfs/weed/storage/backend/rclone_backend [no test files]
? github.com/seaweedfs/seaweedfs/weed/storage/backend/s3_backend [no test files]
=== RUN TestEncodingDecoding
I0603 08:55:55.752315 ec_encoder.go:81 encodeDatFile 1.dat size:2590912
--- PASS: TestEncodingDecoding (0.21s)
=== RUN TestLocateData
[{BlockIndex:5 InnerBlockOffset:100 Size:9900 IsLargeBlock:true LargeBlockRowsCount:1} {BlockIndex:6 InnerBlockOffset:0 Size:10000 IsLargeBlock:true LargeBlockRowsCount:1} {BlockIndex:7 InnerBlockOffset:0 Size:10000 IsLargeBlock:true LargeBlockRowsCount:1} {BlockIndex:8 InnerBlockOffset:0 Size:10000 IsLargeBlock:true LargeBlockRowsCount:1} {BlockIndex:9 InnerBlockOffset:0 Size:10000 IsLargeBlock:true LargeBlockRowsCount:1} {BlockIndex:0 InnerBlockOffset:0 Size:1 IsLargeBlock:false LargeBlockRowsCount:1}]
--- PASS: TestLocateData (0.00s)
=== RUN TestLocateData2
--- PASS: TestLocateData2 (0.00s)
=== RUN TestLocateData3
{BlockIndex:8876 InnerBlockOffset:912752 Size:112568 IsLargeBlock:false LargeBlockRowsCount:2}
--- PASS: TestLocateData3 (0.00s)
=== RUN TestPositioning
offset: 31300679656 size: 1167
offset: 11513014944 size: 66044
offset: 26311863528 size: 26823
interval: {BlockIndex:14852 InnerBlockOffset:994536 Size:26856 IsLargeBlock:false LargeBlockRowsCount:1}, shardId: 2, shardOffset: 2631871720
--- PASS: TestPositioning (0.00s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding 0.215s
? github.com/seaweedfs/seaweedfs/weed/storage/idx [no test files]
=== RUN TestParseFileIdFromString
--- PASS: TestParseFileIdFromString (0.00s)
=== RUN TestParseKeyHash
--- PASS: TestParseKeyHash (0.00s)
=== RUN TestAppend
--- PASS: TestAppend (0.00s)
=== RUN TestNewVolumeId
volume_id_test.go:11: a is not legal volume id, strconv.ParseUint: parsing "a": invalid syntax
--- PASS: TestNewVolumeId (0.00s)
=== RUN TestVolumeId_String
--- PASS: TestVolumeId_String (0.00s)
=== RUN TestVolumeId_Next
--- PASS: TestVolumeId_Next (0.00s)
=== RUN TestTTLReadWrite
--- PASS: TestTTLReadWrite (0.00s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/storage/needle 0.006s
=== RUN TestMemoryUsage
Each 15.12 Bytes Alloc = 24 MiB TotalAlloc = 114 MiB Sys = 49 MiB NumGC = 16 Taken = 772.694628ms
Each 14.90 Bytes Alloc = 48 MiB TotalAlloc = 227 MiB Sys = 85 MiB NumGC = 20 Taken = 793.403457ms
Each 14.83 Bytes Alloc = 71 MiB TotalAlloc = 341 MiB Sys = 125 MiB NumGC = 23 Taken = 733.986836ms
Each 14.79 Bytes Alloc = 95 MiB TotalAlloc = 454 MiB Sys = 157 MiB NumGC = 25 Taken = 719.829898ms
Each 14.77 Bytes Alloc = 119 MiB TotalAlloc = 568 MiB Sys = 202 MiB NumGC = 27 Taken = 714.006234ms
Each 14.75 Bytes Alloc = 143 MiB TotalAlloc = 681 MiB Sys = 246 MiB NumGC = 28 Taken = 722.9071ms
Each 14.74 Bytes Alloc = 166 MiB TotalAlloc = 795 MiB Sys = 270 MiB NumGC = 29 Taken = 715.596928ms
Each 14.73 Bytes Alloc = 190 MiB TotalAlloc = 908 MiB Sys = 294 MiB NumGC = 30 Taken = 718.282485ms
Each 14.73 Bytes Alloc = 214 MiB TotalAlloc = 1022 MiB Sys = 322 MiB NumGC = 31 Taken = 725.84904ms
Each 14.72 Bytes Alloc = 238 MiB TotalAlloc = 1135 MiB Sys = 346 MiB NumGC = 32 Taken = 712.470433ms
--- PASS: TestMemoryUsage (7.33s)
=== RUN TestSnowflakeSequencer
I0603 08:56:03.078539 snowflake_sequencer.go:21 use snowflake seq id generator, nodeid:for_test hex_of_nodeid: 1
--- PASS: TestSnowflakeSequencer (0.05s)
=== RUN TestOverflow2
needle key: 150073
needle key: 150076
needle key: 150088
needle key: 150089
needle key: 150124
needle key: 150137
needle key: 150145
needle key: 150147
needle key: 150158
needle key: 150162
--- PASS: TestOverflow2 (0.00s)
=== RUN TestIssue52
key 10002 ok true 10002, 1250, 10002
key 10002 ok true 10002, 1250, 10002
--- PASS: TestIssue52 (0.00s)
=== RUN TestCompactMap
--- PASS: TestCompactMap (0.04s)
=== RUN TestOverflow
overflow[ 0 ]: 1
overflow[ 1 ]: 2
overflow[ 2 ]: 3
overflow[ 3 ]: 4
overflow[ 4 ]: 5
overflow[ 0 ]: 1 size -12
overflow[ 1 ]: 2 size 12
overflow[ 2 ]: 3 size 24
overflow[ 3 ]: 4 size -12
overflow[ 4 ]: 5 size 12
overflow[ 0 ]: 1
overflow[ 1 ]: 2
overflow[ 2 ]: 3
overflow[ 3 ]: 4
overflow[ 4 ]: 5
overflow[ 0 ]: 1
overflow[ 1 ]: 2
overflow[ 2 ]: 3
overflow[ 3 ]: 4
overflow[ 4 ]: 5
--- PASS: TestOverflow (0.00s)
=== RUN TestCompactSection_Get
compact_map_test.go:202: 1574318345753513987
compact_map_test.go:213: 1574318350048481283
--- PASS: TestCompactSection_Get (0.71s)
=== RUN TestCompactSection_PutOutOfOrderItemBeyondLookBackWindow
--- PASS: TestCompactSection_PutOutOfOrderItemBeyondLookBackWindow (0.00s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/storage/needle_map 8.142s
=== RUN TestReplicaPlacementSerialDeserial
--- PASS: TestReplicaPlacementSerialDeserial (0.00s)
=== RUN TestReplicaPlacementHasReplication
=== RUN TestReplicaPlacementHasReplication/empty_replica_placement
=== RUN TestReplicaPlacementHasReplication/no_replication
=== RUN TestReplicaPlacementHasReplication/same_rack_replication
=== RUN TestReplicaPlacementHasReplication/diff_rack_replication
=== RUN TestReplicaPlacementHasReplication/DC_replication
=== RUN TestReplicaPlacementHasReplication/full_replication
--- PASS: TestReplicaPlacementHasReplication (0.00s)
--- PASS: TestReplicaPlacementHasReplication/empty_replica_placement (0.00s)
--- PASS: TestReplicaPlacementHasReplication/no_replication (0.00s)
--- PASS: TestReplicaPlacementHasReplication/same_rack_replication (0.00s)
--- PASS: TestReplicaPlacementHasReplication/diff_rack_replication (0.00s)
--- PASS: TestReplicaPlacementHasReplication/DC_replication (0.00s)
--- PASS: TestReplicaPlacementHasReplication/full_replication (0.00s)
=== RUN TestSuperBlockReadWrite
--- PASS: TestSuperBlockReadWrite (0.00s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/storage/super_block 0.006s
? github.com/seaweedfs/seaweedfs/weed/storage/types [no test files]
? github.com/seaweedfs/seaweedfs/weed/storage/volume_info [no test files]
=== RUN TestRemoveDataCenter
data: map[dc1:map[rack1:map[server111:map[limit:3 volumes:[map[id:1 size:12312] map[id:2 size:12312] map[id:3 size:12312]]] server112:map[limit:10 volumes:[map[id:4 size:12312] map[id:5 size:12312] map[id:6 size:12312]]]] rack2:map[server121:map[limit:4 volumes:[map[id:4 size:12312] map[id:5 size:12312] map[id:6 size:12312]]] server122:map[limit:4 volumes:[]] server123:map[limit:5 volumes:[map[id:2 size:12312] map[id:3 size:12312] map[id:4 size:12312]]]]] dc2:map[] dc3:map[rack2:map[server321:map[limit:4 volumes:[map[id:1 size:12312] map[id:3 size:12312] map[id:5 size:12312]]]]]]
I0603 08:55:56.008400 node.go:250 weedfs adds child dc1
I0603 08:55:56.008523 node.go:250 weedfs:dc1 adds child rack1
I0603 08:55:56.008529 node.go:250 weedfs:dc1:rack1 adds child server111
I0603 08:55:56.008532 node.go:250 weedfs:dc1:rack1:server111 adds child
I0603 08:55:56.008537 node.go:250 weedfs:dc1:rack1 adds child server112
I0603 08:55:56.008539 node.go:250 weedfs:dc1:rack1:server112 adds child
I0603 08:55:56.008541 node.go:250 weedfs:dc1 adds child rack2
I0603 08:55:56.008543 node.go:250 weedfs:dc1:rack2 adds child server121
I0603 08:55:56.008545 node.go:250 weedfs:dc1:rack2:server121 adds child
I0603 08:55:56.008547 node.go:250 weedfs:dc1:rack2 adds child server122
I0603 08:55:56.008551 node.go:250 weedfs:dc1:rack2:server122 adds child
I0603 08:55:56.008552 node.go:250 weedfs:dc1:rack2 adds child server123
I0603 08:55:56.008554 node.go:250 weedfs:dc1:rack2:server123 adds child
I0603 08:55:56.008557 node.go:250 weedfs adds child dc2
I0603 08:55:56.008560 node.go:250 weedfs adds child dc3
I0603 08:55:56.008561 node.go:250 weedfs:dc3 adds child rack2
I0603 08:55:56.008563 node.go:250 weedfs:dc3:rack2 adds child server321
I0603 08:55:56.008564 node.go:250 weedfs:dc3:rack2:server321 adds child
I0603 08:55:56.008567 node.go:264 weedfs removes dc2
I0603 08:55:56.008570 node.go:264 weedfs removes dc3
--- PASS: TestRemoveDataCenter (0.00s)
=== RUN TestHandlingVolumeServerHeartbeat
I0603 08:55:56.008591 node.go:250 weedfs adds child dc1
I0603 08:55:56.008594 node.go:250 weedfs:dc1 adds child rack1
I0603 08:55:56.008597 node.go:250 weedfs:dc1:rack1 adds child 127.0.0.1:34534
I0603 08:55:56.008600 node.go:250 weedfs:dc1:rack1:127.0.0.1:34534 adds child ssd
I0603 08:55:56.008602 node.go:250 weedfs:dc1:rack1:127.0.0.1:34534 adds child
I0603 08:55:56.008628 volume_layout.go:417 Volume 1 becomes writable
I0603 08:55:56.008640 volume_layout.go:417 Volume 2 becomes writable
I0603 08:55:56.008642 volume_layout.go:417 Volume 3 becomes writable
I0603 08:55:56.008644 volume_layout.go:417 Volume 4 becomes writable
I0603 08:55:56.008646 volume_layout.go:417 Volume 5 becomes writable
I0603 08:55:56.008648 volume_layout.go:417 Volume 6 becomes writable
I0603 08:55:56.008650 volume_layout.go:417 Volume 7 becomes writable
I0603 08:55:56.008653 volume_layout.go:417 Volume 8 becomes writable
I0603 08:55:56.008655 volume_layout.go:417 Volume 9 becomes writable
I0603 08:55:56.008657 volume_layout.go:417 Volume 10 becomes writable
I0603 08:55:56.008660 volume_layout.go:417 Volume 11 becomes writable
I0603 08:55:56.008662 volume_layout.go:417 Volume 12 becomes writable
I0603 08:55:56.008664 volume_layout.go:417 Volume 13 becomes writable
I0603 08:55:56.008665 volume_layout.go:417 Volume 14 becomes writable
I0603 08:55:56.008674 data_node.go:81 Deleting volume id: 8
I0603 08:55:56.008678 data_node.go:81 Deleting volume id: 9
I0603 08:55:56.008679 data_node.go:81 Deleting volume id: 10
I0603 08:55:56.008680 data_node.go:81 Deleting volume id: 11
I0603 08:55:56.008682 data_node.go:81 Deleting volume id: 12
I0603 08:55:56.008683 data_node.go:81 Deleting volume id: 13
I0603 08:55:56.008684 data_node.go:81 Deleting volume id: 14
I0603 08:55:56.008686 data_node.go:81 Deleting volume id: 7
I0603 08:55:56.008689 topology.go:329 removing volume info: Id:8, Size:25432, ReplicaPlacement:000, Collection:, Version:3, FileCount:2343, DeleteCount:345, DeletedByteCount:34524, ReadOnly:false from 127.0.0.1:34534
I0603 08:55:56.008700 volume_layout.go:229 volume 8 does not have enough copies
I0603 08:55:56.008702 volume_layout.go:237 volume 8 remove from writable
I0603 08:55:56.008704 volume_layout.go:405 Volume 8 becomes unwritable
I0603 08:55:56.008706 topology.go:329 removing volume info: Id:9, Size:25432, ReplicaPlacement:000, Collection:, Version:3, FileCount:2343, DeleteCount:345, DeletedByteCount:34524, ReadOnly:false from 127.0.0.1:34534
I0603 08:55:56.008708 volume_layout.go:229 volume 9 does not have enough copies
I0603 08:55:56.008710 volume_layout.go:237 volume 9 remove from writable
I0603 08:55:56.008712 volume_layout.go:405 Volume 9 becomes unwritable
I0603 08:55:56.008713 topology.go:329 removing volume info: Id:10, Size:25432, ReplicaPlacement:000, Collection:, Version:3, FileCount:2343, DeleteCount:345, DeletedByteCount:34524, ReadOnly:false from 127.0.0.1:34534
I0603 08:55:56.008716 volume_layout.go:229 volume 10 does not have enough copies
I0603 08:55:56.008718 volume_layout.go:237 volume 10 remove from writable
I0603 08:55:56.008719 volume_layout.go:405 Volume 10 becomes unwritable
I0603 08:55:56.008721 topology.go:329 removing volume info: Id:11, Size:25432, ReplicaPlacement:000, Collection:, Version:3, FileCount:2343, DeleteCount:345, DeletedByteCount:34524, ReadOnly:false from 127.0.0.1:34534
I0603 08:55:56.008724 volume_layout.go:229 volume 11 does not have enough copies
I0603 08:55:56.008725 volume_layout.go:237 volume 11 remove from writable
I0603 08:55:56.008727 volume_layout.go:405 Volume 11 becomes unwritable
I0603 08:55:56.008729 topology.go:329 removing volume info: Id:12, Size:25432, ReplicaPlacement:000, Collection:, Version:3, FileCount:2343, DeleteCount:345, DeletedByteCount:34524, ReadOnly:false from 127.0.0.1:34534
I0603 08:55:56.008732 volume_layout.go:229 volume 12 does not have enough copies
I0603 08:55:56.008734 volume_layout.go:237 volume 12 remove from writable
I0603 08:55:56.008736 volume_layout.go:405 Volume 12 becomes unwritable
I0603 08:55:56.008737 topology.go:329 removing volume info: Id:13, Size:25432, ReplicaPlacement:000, Collection:, Version:3, FileCount:2343, DeleteCount:345, DeletedByteCount:34524, ReadOnly:false from 127.0.0.1:34534
I0603 08:55:56.008740 volume_layout.go:229 volume 13 does not have enough copies
I0603 08:55:56.008744 volume_layout.go:237 volume 13 remove from writable
I0603 08:55:56.008745 volume_layout.go:405 Volume 13 becomes unwritable
I0603 08:55:56.008747 topology.go:329 removing volume info: Id:14, Size:25432, ReplicaPlacement:000, Collection:, Version:3, FileCount:2343, DeleteCount:345, DeletedByteCount:34524, ReadOnly:false from 127.0.0.1:34534
I0603 08:55:56.008750 volume_layout.go:229 volume 14 does not have enough copies
I0603 08:55:56.008751 volume_layout.go:237 volume 14 remove from writable
I0603 08:55:56.008754 volume_layout.go:405 Volume 14 becomes unwritable
I0603 08:55:56.008756 topology.go:329 removing volume info: Id:7, Size:25432, ReplicaPlacement:000, Collection:, Version:3, FileCount:2343, DeleteCount:345, DeletedByteCount:34524, ReadOnly:false from 127.0.0.1:34534
I0603 08:55:56.008758 volume_layout.go:229 volume 7 does not have enough copies
I0603 08:55:56.008760 volume_layout.go:237 volume 7 remove from writable
I0603 08:55:56.008762 volume_layout.go:405 Volume 7 becomes unwritable
I0603 08:55:56.008766 topology.go:329 removing volume info: Id:3, Size:0, ReplicaPlacement:000, Collection:, Version:3, FileCount:0, DeleteCount:0, DeletedByteCount:0, ReadOnly:false from 127.0.0.1:34534
I0603 08:55:56.008777 volume_layout.go:229 volume 3 does not have enough copies
I0603 08:55:56.008780 volume_layout.go:237 volume 3 remove from writable
I0603 08:55:56.008782 volume_layout.go:405 Volume 3 becomes unwritable
I0603 08:55:56.008786 volume_layout.go:417 Volume 3 becomes writable
after add volume id 1
after add volume id 2
after add volume id 3
after add volume id 4
after add volume id 5
after add volume id 6
after add writable volume id 1
after add writable volume id 2
after add writable volume id 4
after add writable volume id 5
after add writable volume id 6
after add writable volume id 3
I0603 08:55:56.008805 topology_event_handling.go:86 Removing Volume 5 from the dead volume server 127.0.0.1:34534
I0603 08:55:56.008809 volume_layout.go:456 Volume 5 has 0 replica, less than required 1
I0603 08:55:56.008810 volume_layout.go:405 Volume 5 becomes unwritable
I0603 08:55:56.008812 topology_event_handling.go:86 Removing Volume 6 from the dead volume server 127.0.0.1:34534
I0603 08:55:56.008814 volume_layout.go:456 Volume 6 has 0 replica, less than required 1
I0603 08:55:56.008818 volume_layout.go:405 Volume 6 becomes unwritable
I0603 08:55:56.008819 topology_event_handling.go:86 Removing Volume 1 from the dead volume server 127.0.0.1:34534
I0603 08:55:56.008821 volume_layout.go:456 Volume 1 has 0 replica, less than required 1
I0603 08:55:56.008822 volume_layout.go:405 Volume 1 becomes unwritable
I0603 08:55:56.008824 topology_event_handling.go:86 Removing Volume 2 from the dead volume server 127.0.0.1:34534
I0603 08:55:56.008827 volume_layout.go:456 Volume 2 has 0 replica, less than required 1
I0603 08:55:56.008828 volume_layout.go:405 Volume 2 becomes unwritable
I0603 08:55:56.008829 topology_event_handling.go:86 Removing Volume 3 from the dead volume server 127.0.0.1:34534
I0603 08:55:56.008831 volume_layout.go:456 Volume 3 has 0 replica, less than required 1
I0603 08:55:56.008832 volume_layout.go:405 Volume 3 becomes unwritable
I0603 08:55:56.008834 topology_event_handling.go:86 Removing Volume 4 from the dead volume server 127.0.0.1:34534
I0603 08:55:56.008838 volume_layout.go:456 Volume 4 has 0 replica, less than required 1
I0603 08:55:56.008839 volume_layout.go:405 Volume 4 becomes unwritable
I0603 08:55:56.008844 node.go:264 weedfs:dc1:rack1 removes 127.0.0.1:34534
--- PASS: TestHandlingVolumeServerHeartbeat (0.00s)
=== RUN TestAddRemoveVolume
I0603 08:55:56.008869 node.go:250 weedfs adds child dc1
I0603 08:55:56.008871 node.go:250 weedfs:dc1 adds child rack1
I0603 08:55:56.008873 node.go:250 weedfs:dc1:rack1 adds child 127.0.0.1:34534
I0603 08:55:56.008877 node.go:250 weedfs:dc1:rack1:127.0.0.1:34534 adds child
I0603 08:55:56.008879 node.go:250 weedfs:dc1:rack1:127.0.0.1:34534 adds child ssd
I0603 08:55:56.008887 volume_layout.go:417 Volume 1 becomes writable
I0603 08:55:56.008891 topology.go:329 removing volume info: Id:1, Size:100, ReplicaPlacement:000, Collection:xcollection, Version:3, FileCount:123, DeleteCount:23, DeletedByteCount:45, ReadOnly:false from 127.0.0.1:34534
I0603 08:55:56.008894 volume_layout.go:229 volume 1 does not have enough copies
I0603 08:55:56.008895 volume_layout.go:237 volume 1 remove from writable
I0603 08:55:56.008897 volume_layout.go:405 Volume 1 becomes unwritable
--- PASS: TestAddRemoveVolume (0.00s)
=== RUN TestListCollections
I0603 08:55:56.008910 node.go:250 weedfs adds child dc1
I0603 08:55:56.008912 node.go:250 weedfs:dc1 adds child rack1
I0603 08:55:56.008914 node.go:250 weedfs:dc1:rack1 adds child 127.0.0.1:34534
I0603 08:55:56.008917 volume_layout.go:229 volume 1111 does not have enough copies
I0603 08:55:56.008919 volume_layout.go:237 volume 1111 remove from writable
I0603 08:55:56.008922 volume_layout.go:229 volume 2222 does not have enough copies
I0603 08:55:56.008925 volume_layout.go:237 volume 2222 remove from writable
I0603 08:55:56.008928 volume_layout.go:229 volume 3333 does not have enough copies
I0603 08:55:56.008930 volume_layout.go:237
volume 3333 remove from writable === RUN TestListCollections/no_volume_types_selected === RUN TestListCollections/normal_volumes === RUN TestListCollections/EC_volumes === RUN TestListCollections/normal_+_EC_volumes --- PASS: TestListCollections (0.00s) --- PASS: TestListCollections/no_volume_types_selected (0.00s) --- PASS: TestListCollections/normal_volumes (0.00s) --- PASS: TestListCollections/EC_volumes (0.00s) --- PASS: TestListCollections/normal_+_EC_volumes (0.00s) === RUN TestFindEmptySlotsForOneVolume data: map[dc1:map[rack1:map[server111:map[limit:3 volumes:[map[id:1 size:12312] map[id:2 size:12312] map[id:3 size:12312]]] server112:map[limit:10 volumes:[map[id:4 size:12312] map[id:5 size:12312] map[id:6 size:12312]]]] rack2:map[server121:map[limit:4 volumes:[map[id:4 size:12312] map[id:5 size:12312] map[id:6 size:12312]]] server122:map[limit:4 volumes:[]] server123:map[limit:5 volumes:[map[id:2 size:12312] map[id:3 size:12312] map[id:4 size:12312]]]]] dc2:map[] dc3:map[rack2:map[server321:map[limit:4 volumes:[map[id:1 size:12312] map[id:3 size:12312] map[id:5 size:12312]]]]]] I0603 08:55:56.009026 node.go:250 weedfs adds child dc1 I0603 08:55:56.009029 node.go:250 weedfs:dc1 adds child rack1 I0603 08:55:56.009030 node.go:250 weedfs:dc1:rack1 adds child server111 I0603 08:55:56.009032 node.go:250 weedfs:dc1:rack1:server111 adds child I0603 08:55:56.009035 node.go:250 weedfs:dc1:rack1 adds child server112 I0603 08:55:56.009037 node.go:250 weedfs:dc1:rack1:server112 adds child I0603 08:55:56.009039 node.go:250 weedfs:dc1 adds child rack2 I0603 08:55:56.009040 node.go:250 weedfs:dc1:rack2 adds child server121 I0603 08:55:56.009042 node.go:250 weedfs:dc1:rack2:server121 adds child I0603 08:55:56.009044 node.go:250 weedfs:dc1:rack2 adds child server122 I0603 08:55:56.009046 node.go:250 weedfs:dc1:rack2:server122 adds child I0603 08:55:56.009048 node.go:250 weedfs:dc1:rack2 adds child server123 I0603 08:55:56.009049 node.go:250 weedfs:dc1:rack2:server123 adds 
child I0603 08:55:56.009052 node.go:250 weedfs adds child dc2 I0603 08:55:56.009053 node.go:250 weedfs adds child dc3 I0603 08:55:56.009055 node.go:250 weedfs:dc3 adds child rack2 I0603 08:55:56.009057 node.go:250 weedfs:dc3:rack2 adds child server321 I0603 08:55:56.009058 node.go:250 weedfs:dc3:rack2:server321 adds child assigned node : server123 assigned node : server122 assigned node : server121 --- PASS: TestFindEmptySlotsForOneVolume (0.00s) === RUN TestReplication011 data: map[dc1:map[rack1:map[server111:map[limit:300 volumes:[map[id:1 size:12312] map[id:2 size:12312] map[id:3 size:12312]]] server112:map[limit:300 volumes:[map[id:4 size:12312] map[id:5 size:12312] map[id:6 size:12312]]] server113:map[limit:300 volumes:[]] server114:map[limit:300 volumes:[]] server115:map[limit:300 volumes:[]] server116:map[limit:300 volumes:[]]] rack2:map[server121:map[limit:300 volumes:[map[id:4 size:12312] map[id:5 size:12312] map[id:6 size:12312]]] server122:map[limit:300 volumes:[]] server123:map[limit:300 volumes:[map[id:2 size:12312] map[id:3 size:12312] map[id:4 size:12312]]] server124:map[limit:300 volumes:[]] server125:map[limit:300 volumes:[]] server126:map[limit:300 volumes:[]]] rack3:map[server131:map[limit:300 volumes:[]] server132:map[limit:300 volumes:[]] server133:map[limit:300 volumes:[]] server134:map[limit:300 volumes:[]] server135:map[limit:300 volumes:[]] server136:map[limit:300 volumes:[]]]]] I0603 08:55:56.009133 node.go:250 weedfs adds child dc1 I0603 08:55:56.009135 node.go:250 weedfs:dc1 adds child rack1 I0603 08:55:56.009138 node.go:250 weedfs:dc1:rack1 adds child server115 I0603 08:55:56.009140 node.go:250 weedfs:dc1:rack1:server115 adds child I0603 08:55:56.009142 node.go:250 weedfs:dc1:rack1 adds child server116 I0603 08:55:56.009143 node.go:250 weedfs:dc1:rack1:server116 adds child I0603 08:55:56.009145 node.go:250 weedfs:dc1:rack1 adds child server111 I0603 08:55:56.009148 node.go:250 weedfs:dc1:rack1:server111 adds child I0603 08:55:56.009154 
node.go:250 weedfs:dc1:rack1 adds child server112 I0603 08:55:56.009156 node.go:250 weedfs:dc1:rack1:server112 adds child I0603 08:55:56.009158 node.go:250 weedfs:dc1:rack1 adds child server113 I0603 08:55:56.009160 node.go:250 weedfs:dc1:rack1:server113 adds child I0603 08:55:56.009162 node.go:250 weedfs:dc1:rack1 adds child server114 I0603 08:55:56.009163 node.go:250 weedfs:dc1:rack1:server114 adds child I0603 08:55:56.009165 node.go:250 weedfs:dc1 adds child rack2 I0603 08:55:56.009167 node.go:250 weedfs:dc1:rack2 adds child server123 I0603 08:55:56.009168 node.go:250 weedfs:dc1:rack2:server123 adds child I0603 08:55:56.009171 node.go:250 weedfs:dc1:rack2 adds child server124 I0603 08:55:56.009172 node.go:250 weedfs:dc1:rack2:server124 adds child I0603 08:55:56.009174 node.go:250 weedfs:dc1:rack2 adds child server125 I0603 08:55:56.009176 node.go:250 weedfs:dc1:rack2:server125 adds child I0603 08:55:56.009178 node.go:250 weedfs:dc1:rack2 adds child server126 I0603 08:55:56.009185 node.go:250 weedfs:dc1:rack2:server126 adds child I0603 08:55:56.009187 node.go:250 weedfs:dc1:rack2 adds child server121 I0603 08:55:56.009188 node.go:250 weedfs:dc1:rack2:server121 adds child I0603 08:55:56.009191 node.go:250 weedfs:dc1:rack2 adds child server122 I0603 08:55:56.009192 node.go:250 weedfs:dc1:rack2:server122 adds child I0603 08:55:56.009194 node.go:250 weedfs:dc1 adds child rack3 I0603 08:55:56.009195 node.go:250 weedfs:dc1:rack3 adds child server132 I0603 08:55:56.009197 node.go:250 weedfs:dc1:rack3:server132 adds child I0603 08:55:56.009199 node.go:250 weedfs:dc1:rack3 adds child server133 I0603 08:55:56.009200 node.go:250 weedfs:dc1:rack3:server133 adds child I0603 08:55:56.009202 node.go:250 weedfs:dc1:rack3 adds child server134 I0603 08:55:56.009203 node.go:250 weedfs:dc1:rack3:server134 adds child I0603 08:55:56.009205 node.go:250 weedfs:dc1:rack3 adds child server135 I0603 08:55:56.009206 node.go:250 weedfs:dc1:rack3:server135 adds child I0603 08:55:56.009208 
node.go:250 weedfs:dc1:rack3 adds child server136 I0603 08:55:56.009209 node.go:250 weedfs:dc1:rack3:server136 adds child I0603 08:55:56.009211 node.go:250 weedfs:dc1:rack3 adds child server131 I0603 08:55:56.009212 node.go:250 weedfs:dc1:rack3:server131 adds child assigned node : server125 assigned node : server122 assigned node : server114 --- PASS: TestReplication011 (0.00s) === RUN TestFindEmptySlotsForOneVolumeScheduleByWeight data: map[dc1:map[rack1:map[server111:map[limit:2000 volumes:[]]]] dc2:map[rack2:map[server222:map[limit:2000 volumes:[]]]] dc3:map[rack3:map[server333:map[limit:1000 volumes:[]]]] dc4:map[rack4:map[server444:map[limit:1000 volumes:[]]]] dc5:map[rack5:map[server555:map[limit:500 volumes:[]]]] dc6:map[rack6:map[server666:map[limit:500 volumes:[]]]]] I0603 08:55:56.009254 node.go:250 weedfs adds child dc1 I0603 08:55:56.009256 node.go:250 weedfs:dc1 adds child rack1 I0603 08:55:56.009258 node.go:250 weedfs:dc1:rack1 adds child server111 I0603 08:55:56.009260 node.go:250 weedfs:dc1:rack1:server111 adds child I0603 08:55:56.009261 node.go:250 weedfs adds child dc2 I0603 08:55:56.009263 node.go:250 weedfs:dc2 adds child rack2 I0603 08:55:56.009264 node.go:250 weedfs:dc2:rack2 adds child server222 I0603 08:55:56.009267 node.go:250 weedfs:dc2:rack2:server222 adds child I0603 08:55:56.009269 node.go:250 weedfs adds child dc3 I0603 08:55:56.009274 node.go:250 weedfs:dc3 adds child rack3 I0603 08:55:56.009275 node.go:250 weedfs:dc3:rack3 adds child server333 I0603 08:55:56.009277 node.go:250 weedfs:dc3:rack3:server333 adds child I0603 08:55:56.009280 node.go:250 weedfs adds child dc4 I0603 08:55:56.009281 node.go:250 weedfs:dc4 adds child rack4 I0603 08:55:56.009283 node.go:250 weedfs:dc4:rack4 adds child server444 I0603 08:55:56.009285 node.go:250 weedfs:dc4:rack4:server444 adds child I0603 08:55:56.009288 node.go:250 weedfs adds child dc5 I0603 08:55:56.009290 node.go:250 weedfs:dc5 adds child rack5 I0603 08:55:56.009294 node.go:250 
weedfs:dc5:rack5 adds child server555 I0603 08:55:56.009296 node.go:250 weedfs:dc5:rack5:server555 adds child I0603 08:55:56.009298 node.go:250 weedfs adds child dc6 I0603 08:55:56.009300 node.go:250 weedfs:dc6 adds child rack6 I0603 08:55:56.009302 node.go:250 weedfs:dc6:rack6 adds child server666 I0603 08:55:56.009305 node.go:250 weedfs:dc6:rack6:server666 adds child server111 : 557 server333 : 309 server666 : 169 server444 : 286 server555 : 151 server222 : 528 --- PASS: TestFindEmptySlotsForOneVolumeScheduleByWeight (0.00s) === RUN TestPickForWrite data: map[dc1:map[rack1:map[serverdc111:map[ip:127.0.0.1 limit:100 volumes:[map[collection:test id:1 replication:001 size:12312] map[collection:test id:2 replication:100 size:12312] map[collection:test id:4 replication:100 size:12312] map[collection:test id:6 replication:010 size:12312]]]]] dc2:map[rack1:map[serverdc211:map[ip:127.0.0.2 limit:100 volumes:[map[collection:test id:2 replication:100 size:12312] map[collection:test id:3 replication:010 size:12312] map[collection:test id:5 replication:001 size:12312] map[collection:test id:6 replication:010 size:12312]]]]] dc3:map[rack1:map[serverdc311:map[ip:127.0.0.3 limit:100 volumes:[map[collection:test id:1 replication:001 size:12312] map[collection:test id:3 replication:010 size:12312] map[collection:test id:4 replication:100 size:12312] map[collection:test id:5 replication:001 size:12312]]]]]] I0603 08:55:56.010755 node.go:250 weedfs adds child dc1 I0603 08:55:56.010760 node.go:250 weedfs:dc1 adds child rack1 I0603 08:55:56.010762 node.go:250 weedfs:dc1:rack1 adds child serverdc111 I0603 08:55:56.010766 volume_layout.go:417 Volume 1 becomes writable I0603 08:55:56.010768 node.go:250 weedfs:dc1:rack1:serverdc111 adds child I0603 08:55:56.010772 volume_layout.go:417 Volume 2 becomes writable I0603 08:55:56.010774 volume_layout.go:417 Volume 4 becomes writable I0603 08:55:56.010780 volume_layout.go:417 Volume 6 becomes writable I0603 08:55:56.010782 node.go:250 weedfs 
adds child dc2 I0603 08:55:56.010784 node.go:250 weedfs:dc2 adds child rack1 I0603 08:55:56.010785 node.go:250 weedfs:dc2:rack1 adds child serverdc211 I0603 08:55:56.010787 volume_layout.go:405 Volume 2 becomes unwritable I0603 08:55:56.010789 volume_layout.go:417 Volume 2 becomes writable I0603 08:55:56.010791 node.go:250 weedfs:dc2:rack1:serverdc211 adds child I0603 08:55:56.010794 volume_layout.go:417 Volume 3 becomes writable I0603 08:55:56.010796 volume_layout.go:417 Volume 5 becomes writable I0603 08:55:56.010800 volume_layout.go:405 Volume 6 becomes unwritable I0603 08:55:56.010806 volume_layout.go:417 Volume 6 becomes writable I0603 08:55:56.010808 node.go:250 weedfs adds child dc3 I0603 08:55:56.010810 node.go:250 weedfs:dc3 adds child rack1 I0603 08:55:56.010812 node.go:250 weedfs:dc3:rack1 adds child serverdc311 I0603 08:55:56.010815 volume_layout.go:405 Volume 1 becomes unwritable I0603 08:55:56.010817 volume_layout.go:417 Volume 1 becomes writable I0603 08:55:56.010818 node.go:250 weedfs:dc3:rack1:serverdc311 adds child I0603 08:55:56.010821 volume_layout.go:405 Volume 3 becomes unwritable I0603 08:55:56.010823 volume_layout.go:417 Volume 3 becomes writable I0603 08:55:56.010825 volume_layout.go:405 Volume 4 becomes unwritable I0603 08:55:56.010826 volume_layout.go:417 Volume 4 becomes writable I0603 08:55:56.010828 volume_layout.go:405 Volume 5 becomes unwritable I0603 08:55:56.010829 volume_layout.go:417 Volume 5 becomes writable --- PASS: TestPickForWrite (0.00s) === RUN TestVolumesBinaryState === RUN TestVolumesBinaryState/mark_true_when_copies_exist === RUN TestVolumesBinaryState/mark_true_when_no_copies_exist --- PASS: TestVolumesBinaryState (0.00s) --- PASS: TestVolumesBinaryState/mark_true_when_copies_exist (0.00s) --- PASS: TestVolumesBinaryState/mark_true_when_no_copies_exist (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/topology 0.011s === RUN TestByteParsing --- PASS: TestByteParsing (0.00s) === RUN TestSameAsJavaImplementation Now we 
need to generate a 256-bit key for AES 256 GCM --- PASS: TestSameAsJavaImplementation (0.00s) === RUN TestToShortFileName --- PASS: TestToShortFileName (0.00s) === RUN TestHumanReadableIntsMax --- PASS: TestHumanReadableIntsMax (0.00s) === RUN TestHumanReadableInts --- PASS: TestHumanReadableInts (0.00s) === RUN TestAsyncPool -- Executing third function -- -- Executing first function -- -- Executing second function -- -- Third Function finished -- -- Executing fourth function -- -- Second Function finished -- -- Executing fifth function -- -- First Function finished -- 1 2 3 -- Fourth fifth finished -- -- Fourth Function finished -- 4 5 --- PASS: TestAsyncPool (0.12s) === RUN TestOrderedLock ActiveLock 1 acquired lock 0 ActiveLock 3 acquired lock 0 ActiveLock 2 acquired lock 0 ActiveLock 1 released lock 0 ActiveLock 3 released lock 0 ActiveLock 2 released lock 0 ActiveLock 4 acquired lock 1 ActiveLock 6 acquired lock 0 ActiveLock 6 released lock 0 ActiveLock 5 acquired lock 0 ActiveLock 4 released lock 1 ActiveLock 5 released lock 0 ActiveLock 7 acquired lock 1 ActiveLock 7 released lock 1 ActiveLock 8 acquired lock 1 ActiveLock 8 released lock 1 ActiveLock 9 acquired lock 1 ActiveLock 9 released lock 1 ActiveLock 10 acquired lock 0 ActiveLock 10 released lock 0 ActiveLock 11 acquired lock 0 ActiveLock 12 acquired lock 0 ActiveLock 14 acquired lock 0 ActiveLock 13 acquired lock 0 ActiveLock 14 released lock 0 ActiveLock 15 acquired lock 0 ActiveLock 15 released lock 0 ActiveLock 12 released lock 0 ActiveLock 11 released lock 0 ActiveLock 13 released lock 0 ActiveLock 17 acquired lock 1 ActiveLock 17 released lock 1 ActiveLock 18 acquired lock 0 ActiveLock 19 acquired lock 0 ActiveLock 20 acquired lock 0 ActiveLock 21 acquired lock 0 ActiveLock 19 released lock 0 ActiveLock 18 released lock 0 ActiveLock 20 released lock 0 ActiveLock 21 released lock 0 ActiveLock 22 acquired lock 1 ActiveLock 22 released lock 1 ActiveLock 23 acquired lock 0 ActiveLock 24 acquired 
lock 0 ActiveLock 25 acquired lock 0 ActiveLock 26 acquired lock 0 ActiveLock 26 released lock 0 ActiveLock 25 released lock 0 ActiveLock 24 released lock 0 ActiveLock 23 released lock 0 ActiveLock 27 acquired lock 1 ActiveLock 27 released lock 1 ActiveLock 28 acquired lock 0 ActiveLock 30 acquired lock 0 ActiveLock 29 acquired lock 0 ActiveLock 31 acquired lock 0 ActiveLock 31 released lock 0 ActiveLock 29 released lock 0 ActiveLock 30 released lock 0 ActiveLock 28 released lock 0 ActiveLock 32 acquired lock 1 ActiveLock 32 released lock 1 ActiveLock 33 acquired lock 0 ActiveLock 34 acquired lock 0 ActiveLock 35 acquired lock 0 ActiveLock 34 released lock 0 ActiveLock 33 released lock 0 ActiveLock 35 released lock 0 ActiveLock 36 acquired lock 1 ActiveLock 36 released lock 1 ActiveLock 37 acquired lock 0 ActiveLock 38 acquired lock 0 ActiveLock 40 acquired lock 0 ActiveLock 40 released lock 0 ActiveLock 38 released lock 0 ActiveLock 37 released lock 0 ActiveLock 41 acquired lock 1 ActiveLock 41 released lock 1 ActiveLock 42 acquired lock 0 ActiveLock 43 acquired lock 0 ActiveLock 44 acquired lock 0 ActiveLock 45 acquired lock 0 ActiveLock 46 acquired lock 0 ActiveLock 47 acquired lock 0 ActiveLock 48 acquired lock 0 ActiveLock 49 acquired lock 0 ActiveLock 39 acquired lock 0 ActiveLock 50 acquired lock 0 ActiveLock 16 acquired lock 0 ActiveLock 39 released lock 0 ActiveLock 16 released lock 0 ActiveLock 50 released lock 0 ActiveLock 47 released lock 0 ActiveLock 48 released lock 0 ActiveLock 46 released lock 0 ActiveLock 49 released lock 0 ActiveLock 42 released lock 0 ActiveLock 44 released lock 0 ActiveLock 43 released lock 0 ActiveLock 45 released lock 0 --- PASS: TestOrderedLock (1.35s) === RUN TestParseMinFreeSpace --- PASS: TestParseMinFreeSpace (0.00s) === RUN TestNewQueue --- PASS: TestNewQueue (0.00s) === RUN TestEnqueueAndConsume 1 2 3 ----------------------- 4 5 6 7 ----------------------- --- PASS: TestEnqueueAndConsume (0.00s) PASS ok 
github.com/seaweedfs/seaweedfs/weed/util 1.473s ? github.com/seaweedfs/seaweedfs/weed/util/buffer_pool [no test files] === RUN TestJobQueue enqueued 5 items dequeue 1 dequeue 2 enqueue 6 enqueue 7 dequeue ... dequeued 3 dequeue ... dequeued 4 dequeue ... dequeued 5 dequeue ... dequeued 6 dequeue ... dequeued 7 enqueue 8 enqueue 9 enqueue 10 enqueue 11 enqueue 12 dequeued 8 dequeued 9 dequeued 10 dequeued 11 dequeued 12 --- PASS: TestJobQueue (0.00s) === RUN TestJobQueueClose dequeued 1 dequeued 2 dequeued 3 dequeued 4 dequeued 5 dequeued 6 dequeued 7 --- PASS: TestJobQueueClose (0.00s) PASS ok github.com/seaweedfs/seaweedfs/weed/util/buffered_queue 0.001s ? github.com/seaweedfs/seaweedfs/weed/util/buffered_writer [no test files] === RUN TestOnDisk I0603 08:55:56.009641 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk4082393654/001/c0_2_0.ldb, watermark 0, num of entries:0 I0603 08:55:56.010316 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk4082393654/001/c0_2_0.ldb... , watermark: 0 I0603 08:55:56.010617 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk4082393654/001/c0_2_1.ldb, watermark 0, num of entries:0 I0603 08:55:56.011421 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk4082393654/001/c0_2_1.ldb... , watermark: 0 I0603 08:55:56.012486 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk4082393654/001/c1_3_0.ldb, watermark 0, num of entries:0 I0603 08:55:56.013454 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk4082393654/001/c1_3_0.ldb... , watermark: 0 I0603 08:55:56.014019 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk4082393654/001/c1_3_1.ldb, watermark 0, num of entries:0 I0603 08:55:56.014960 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk4082393654/001/c1_3_1.ldb... 
, watermark: 0 I0603 08:55:56.015326 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk4082393654/001/c1_3_2.ldb, watermark 0, num of entries:0 I0603 08:55:56.015781 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk4082393654/001/c1_3_2.ldb... , watermark: 0 I0603 08:55:56.016805 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk4082393654/001/c2_2_0.ldb, watermark 0, num of entries:0 I0603 08:55:56.017095 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk4082393654/001/c2_2_0.ldb... , watermark: 0 I0603 08:55:56.017857 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk4082393654/001/c2_2_1.ldb, watermark 0, num of entries:0 I0603 08:55:56.018674 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk4082393654/001/c2_2_1.ldb... , watermark: 0 I0603 08:55:56.019011 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk4082393654/001/c0_2_0.ldb, watermark 0, num of entries:0 I0603 08:55:56.019285 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk4082393654/001/c0_2_0.ldb... , watermark: 0 I0603 08:55:56.019710 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk4082393654/001/c0_2_1.ldb, watermark 0, num of entries:0 I0603 08:55:56.020702 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk4082393654/001/c0_2_1.ldb... , watermark: 0 I0603 08:55:56.021772 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk4082393654/001/c0_2_0.ldb, watermark 0, num of entries:2 I0603 08:55:56.022545 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk4082393654/001/c0_2_0.ldb... , watermark: 0 I0603 08:55:56.024039 needle_map_leveldb.go:122 generateLevelDbFile /tmp/TestOnDisk4082393654/001/c0_2_1.ldb, watermark 0, num of entries:1 I0603 08:55:56.024386 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk4082393654/001/c0_2_1.ldb... , watermark: 0 I0603 08:55:56.025173 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk4082393654/001/c1_3_0.ldb... 
, watermark: 0
I0603 08:55:56.025632 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk4082393654/001/c1_3_1.ldb... , watermark: 0
I0603 08:55:56.025991 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk4082393654/001/c1_3_2.ldb... , watermark: 0
I0603 08:55:56.026887 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk4082393654/001/c2_2_0.ldb... , watermark: 0
I0603 08:55:56.027291 needle_map_leveldb.go:66 Loading /tmp/TestOnDisk4082393654/001/c2_2_1.ldb... , watermark: 0
chunk_cache_on_disk_test.go:98: failed to write to and read from cache: 2
--- FAIL: TestOnDisk (0.02s)
FAIL
FAIL github.com/seaweedfs/seaweedfs/weed/util/chunk_cache 0.028s
? github.com/seaweedfs/seaweedfs/weed/util/fla9 [no test files]
? github.com/seaweedfs/seaweedfs/weed/util/grace [no test files]
? github.com/seaweedfs/seaweedfs/weed/util/http [no test files]
? github.com/seaweedfs/seaweedfs/weed/util/http/client [no test files]
? github.com/seaweedfs/seaweedfs/weed/util/httpdown [no test files]
=== RUN TestNewLogBufferFirstBuffer
processed all messages
E0603 08:55:56.042253 log_read.go:115 LoopProcessLogData: test process log entry 1 ts_ns:1748940956042231563 partition_key_hash:-736260903 data:"šr\xe5֐l3\xbc7\xca\x12\xf9\xa6\n\xbe>[ؐ\x1al\x87\xf4\xfd\x95mM\xc3\xf5%\xfd\xc1\xd7ɱ\x11\x90\x1b\x0b\xb6n \xb0\xca\x18\x95+\x99\xc2֍ \x0ct\x9eOW[^\xf7\xbd\x06#d\xa2{L'\x83\xc4G\x92\x86&\xf2}\xb6\x82kK^\xdc\x0e\xb4\x1b\x16ᩭ\x1f\xdf/V\xc1O\xd1\xfd3\xae\xd4\xf3\xfdM\xc4\xe66\xc2fD\n\xbf=u\x93\xd4\xeb\x12\xdd\x07)\xcb\xce\t\x81\x95\x80\xbf\xb1\n\x00\n.\x11x\xbfQu\x0c\x1bi\xbb\xfaݾI\xf90\xde\xe8ʜ\x14\x15\xb9\n\xdc\x04\x02hK\x11Y7k\xd4\xd4R\xdc\xd2\r\xf1\xe2\x1f*\xfb\x04\x1b\x99ǣ\x9bd\x98\x88}T\xa6ޠ?%ٟ\xb3\xdf\x04%Q\xb1\x17xΕ
\xb8\x99\xea|\x1f/\x12\xad\xab[\xcca\x05S\x0b\x8eR\x17br2$Lu\xa1\x8eFQ\xde&렢\xa3\x84\xb4r\xba;\xf2)\xb8U\xcb\xed\xa3\xfa\xa0i\x8b|A\x9c\xa7_M`\xd0\x0e\xeb5k\x96]\xbbl\x05\xbb\xb2\x97(*\x97$\xf7\x8d\u0088Q\xf1a\xd9\x07s\x10\x97\x17\xd1u\x13\xa2\x0e\xba\x8cS\xc3\xc0\xdf\x0f\n\x9e\xb3N\x06\x15\xe6\xdeB4\xc0\x024:ʼn\x7f\x1c7\r;\x87\xae\x1dT\x8f\x897G\xd6[\x97!\x9a\x89\xfa\x1e\xa1e\xfd΋>bt\x1eP>Y\xee\x81\xe5{&\n\xa1\xabZ\xe2\xb8\t\x06\x14\xc8\xf1( \xe6 \xb9+\x98*.)\r\x8a\xdfI\xb9Z\xea\xbdXQ\x97=\xf7;\xe6\xe47\x8dc\x96\xe7\xd1/슰2X\x1d4\xdd\xf0\xfaC\x14G\xcbZVb+\x18\xf0\x0e\x83~\x8c\r\xb9\xe0\xdc-(X\xce\x13\xea|\xc8ߔD 6-\x0c\x8au\x92\xd1E^!\x0fR\x11zE=\x001ɴ1\xc7^h\xfb>\x90\xfbp?P\x98\xa6v^\x0e\x93\x93 )qj\xb1\xc5\xcd\x1d\xb5dx\xd97pߍ\xa0\xfcP\x8e@\xb3#\xf5q&@[dW\xbf\xde1_\xe56\x7f\x9cu8\xafv\xf6 g\x91N\xe8HN°\x8bn\xbd\x91\x8d\xd0L\x0b\xb40Z\x84\x83\x93\xc5j\xc7D٠\xcf\nQ\x94z\n\xeb(\x91\x92\x8dԲ\x13eP\x8a,\x8c\xb4\x1bp0\xd54H\x90E\xeb\x95\xfd\x8d\xd6\xc0\x00ѵ\x13\x1a&\xb12\xfb\x08\n\xfe\xb9\xeaJwR\xb9+`<\x95[Mp\xe93|\xa9Ƭ\x01S}\xe8\x92C\x02\xb3*\x8d\xc1i\xbb\x89\xd6\"\xdb+\xb8,+\xfb\xec\xb6N\xc6\xea\xae\xe7^3Z\x08h\xd9\x16\x89\xb3(\x17ѻ\x11;\x12}N\x1c\xb3\xc0\xffu\xf4\xf8\xb7\xdfd\x1b\xaa\xc8\xce7\x1f\xfan\x01~[\xd27w2\x14\xa8\xb6\xad_\xd5\xe8\xba\xddp\r\xb7M\x9d\x8a\x92r\x00G\x06\xd6\xee\x93n\n\xae\xbcs=T\xc6\xd6\xea\x17\x88G`?Q\x8b\xee\xc9`\xe4\xc4\xdfO\x94\x074\xa2\x87\xa9\x1c\xb8d\xfe\"\x81\xea\x0bt\xb6zVݐ\xbaO\xc8{sA\xf8/\x95\xf1\xa0k\xb2~K\xfd\x7fcY\x0e\x87`9\xb8b\xa1\xc8A\xb1\xe7\xb4Z\xb3\xcf\xe7\xe0b}s\xa9\x16!\x93\x95\xfa\x1dJ\x97\xff)H\xcf\xf7\xe6\xa6\x06\x98\x8eE%,&3\xc8\xd52\xa8\x19\xa2l\x9bu\x96eit6\xf7*p\x04\xb1\t\xbb\x10\x16S\xc7k\x1c\xba\x0b\xb2\x9d\x0b}\xc9\xf4\xd0\xfa\xf3\\\xe3\x9aY\x8d\xa1\xe6\x8f\xec\x8e 
kV\xf87r2\xc3\xc8\"\xc9\x19n\x02\xa6\x9b\xc7%\x05\x99|\x90\x8d\xbf\x034\xc7\x0fb\xd6[\xec\xeb\xf4\xe7\xce\xc1\x11\xcbf\x0b\xdb;\xa9r\xf8\x195S\xb1U\xec4bص\x00\x11\xe8\x16N\xce\t\x00ȥ`\xe1x\x8b\x91\xfbH\x00n\xf7\xfa\x8f&>\x88\xad\xb2\xa0bM\x14\xe2\x96A\xad\xe8^c\x94[\x90\xbc\xf4\x1b\xacs\x82\xdd\xd2f\x9br\xe0\t\xa0I\x86\x9f9x\xdbK^\x1bՒ\"Ur\x87\x8d\x89\x7fp\xa4B\xe1ڱ\x1b\x9d\xb8\xd4\x08\xa9\xfcp\xf1\xff\x80\xaa-\x0b\x1flp\xf34m\x7f\xe0": EOF
before flush: sent 5000 received 5000 lastProcessedTime 2025-06-03 08:55:56.042231563 +0000 UTC isDone true err: EOF
--- PASS: TestNewLogBufferFirstBuffer (0.04s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/util/log_buffer 0.042s
=== RUN TestAllocateFree
--- PASS: TestAllocateFree (0.00s)
=== RUN TestAllocateFreeEdgeCases
--- PASS: TestAllocateFreeEdgeCases (0.00s)
=== RUN TestBitCount
--- PASS: TestBitCount (0.00s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/util/mem 0.002s
=== RUN TestNameList
0 1 10 11 12 13 14 15 16 17 18 19 2 20 21 22 23 24 25 26 27 28 29 3 30 31 32 33 34 35 36 37 38 39 4 40 41 42 43 44 45 46 47 48 49 5 50 51 52 53 54 55 56 57 58 59 6 60 61 62 63 64 65 66 67 68 69 7 70 71 72 73 74 75 76 77 78 79 8 80 81 82 83 84 85 86 87 88 89 9 90 91 92 93 94 95 96 97 98 99
--- PASS: TestNameList (0.05s)
=== RUN TestReverseInsert
--- PASS: TestReverseInsert (0.00s)
=== RUN TestInsertAndFind
--- PASS: TestInsertAndFind (0.04s)
=== RUN TestDelete
--- PASS: TestDelete (0.04s)
=== RUN TestNext
--- PASS: TestNext (0.01s)
=== RUN TestPrev
--- PASS: TestPrev (0.01s)
=== RUN TestFindGreaterOrEqual
--- PASS: TestFindGreaterOrEqual (0.02s)
=== RUN TestChangeValue
--- PASS: TestChangeValue (0.02s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/util/skiplist 0.186s
=== RUN TestLocationIndex
--- PASS: TestLocationIndex (0.00s)
=== RUN TestLookupFileId
--- PASS: TestLookupFileId (0.00s)
=== RUN TestConcurrentGetLocations
--- PASS: TestConcurrentGetLocations (0.91s)
PASS
ok github.com/seaweedfs/seaweedfs/weed/wdclient 0.913s
? github.com/seaweedfs/seaweedfs/weed/wdclient/exclusive_locks [no test files]
? github.com/seaweedfs/seaweedfs/weed/wdclient/net2 [no test files]
? github.com/seaweedfs/seaweedfs/weed/wdclient/resource_pool [no test files]
FAIL
==> ERROR: A failure occurred in check(). Aborting...
==> ERROR: Build failed, check /home/alhp/workspace/chroot/build_b7cbdcd8-e947-4c00-b258-8f106e541c5e/build
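With verbose Go test output interleaved like this, the single real failure (TestOnDisk in weed/util/chunk_cache, which then aborts makepkg's check()) is easy to miss among thousands of PASS lines. A minimal triage sketch for a saved copy of such a log (the file name `build.log` is hypothetical, and the heredoc is only a tiny stand-in for the real log):

```shell
# Build a tiny stand-in for the real build log (the actual log is far longer).
cat > build.log <<'EOF'
--- PASS: TestConcurrentGetLocations (0.91s)
chunk_cache_on_disk_test.go:98: failed to write to and read from cache: 2
--- FAIL: TestOnDisk (0.02s)
FAIL github.com/seaweedfs/seaweedfs/weed/util/chunk_cache 0.028s
==> ERROR: A failure occurred in check(). Aborting...
EOF

# Keep only Go test failures ("--- FAIL"/"FAIL" at line start) and makepkg errors.
grep -E '^(--- FAIL|FAIL|==> ERROR)' build.log
```

Run against the full log above, this reduces the output to the `--- FAIL: TestOnDisk` line, the package-level `FAIL` summaries, and the two `==> ERROR` lines, which is usually enough to decide where to start debugging.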