OS: Linux
I have a situation where I regularly receive files larger than 30 GB. I split them into 2 GB parts and then push each part to S3.
Right now I run the split, and only once it finishes do I push the parts in parallel.
I want to start the split and, as soon as the first part file has finished being written, kick off the push command for it, and do the same for each part file as it completes. This would save me the waiting time and avoid flooding the system's outgoing bandwidth all at once.
I thought I once saw an option for this on split (-XAC or something like that), but it doesn't seem to exist anymore. I could write a script that watches for split's output files and acts accordingly, but I thought I'd ask in case someone knows of a command that already does this that I'm unaware of.
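For illustration, this is roughly the one-liner I'm hoping exists. A minimal sketch, assuming a split whose --filter option pipes each chunk to a command as it is produced (newer GNU coreutils documents such an option) and a made-up bucket name standing in for my push step:

    # hypothetical: stream each 2 GB chunk into the upload as split emits it;
    # with GNU split's --filter, $FILE holds the current chunk's name
    split -b 2G -d --filter='aws s3 cp - "s3://my-bucket/parts/$FILE"' bigfile part-

(The single quotes matter so that $FILE is expanded by the filter command, not by my shell.)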
Thanks in advance.
You can work with a loop and dd.
    # gen a test file
    dd if=/dev/urandom bs=1K count=1024 of=test.bin
    sourcefile="test.bin"

It should be easily scripted (bash):

    bsize=$((128 * 1024))
    flength=$(stat --printf=%s "$sourcefile")
    for i in $(seq 0 $(( ($flength - 1) / $bsize ))); do
        dd if="$sourcefile" bs=$bsize skip=$i count=1 2>/dev/null
    done | md5sum
    # verify
    md5sum "$sourcefile"

    # append a random number of lines so the last chunk is partial, then re-check
    for i in $(seq $RANDOM); do echo hello >> "$sourcefile"; done
    flength=$(stat --printf=%s "$sourcefile")
    for i in $(seq 0 $(( ($flength - 1) / $bsize ))); do
        dd if="$sourcefile" bs=$bsize skip=$i count=1 2>/dev/null
    done | md5sum
    # validate md5sum
    md5sum "$sourcefile"
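Applied to your actual problem, the same loop can feed each chunk straight into the upload instead of md5sum. A rough sketch, assuming 2 GiB chunks, the aws CLI for the push, and a made-up bucket name:

    # stream every 2 GiB chunk to S3 as it is read; 64 MiB dd blocks,
    # 32 blocks per chunk, so dd never has to buffer a whole 2 GiB chunk
    bsize=$((64 * 1024 * 1024))
    cblocks=32
    flength=$(stat --printf=%s "$sourcefile")
    nchunks=$(( ($flength - 1) / ($bsize * $cblocks) ))
    for i in $(seq 0 $nchunks); do
        dd if="$sourcefile" bs=$bsize skip=$(($i * $cblocks)) count=$cblocks 2>/dev/null \
            | aws s3 cp - "s3://my-bucket/parts/part-$i"
    done

The first upload starts as soon as the first chunk is being read, so there is no separate waiting-for-split phase at all.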
The test script works like a charm and generates the following output:

    1024+0 records in
    1024+0 records out
    1048576 bytes (1.0 MB) copied, 0.27551 s, 3.8 MB/s
    d73c5a920dae16861983c95d8fb1e94b  -
    d73c5a920dae16861983c95d8fb1e94b  test.bin
    d14ae9ae62652bc7768b076226a6320a  -
    d14ae9ae62652bc7768b076226a6320a  test.bin

Now piping the chunks into your network job is left as a challenge for you: you could mkfifo and hand each chunk to a separate sub-job through its own FIFO, and probably use xargs -P n to split the work into parallel jobs, if that is of any value (bandwidth is usually the limiting factor).
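For the xargs -P variant, a sketch (same made-up names as above; {} is the chunk index, which xargs substitutes into the command before the inner shell runs):

    # run up to 4 chunk uploads concurrently
    seq 0 "$nchunks" | xargs -P4 -I{} sh -c '
        dd if="bigfile" bs=$((64 * 1024 * 1024)) skip=$(( {} * 32 )) count=32 2>/dev/null \
            | aws s3 cp - "s3://my-bucket/parts/part-{}"
    '

Keep -P at or below what your uplink can actually sustain; more parallel jobs just slice the same bandwidth thinner.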