Split metadata size exceeded 10000000

java.io.IOException: Split metadata size exceeded 10000000. That was the error I got when trying to process ~20TB of highly compressed logs (~100TB uncompressed) on my 64-node Amazon EMR cluster. Naturally, I found some good resources recommending a quick fix: modifying the mapred-site.xml file in /home/hadoop/conf/. Warning: By setting this configuration to -1, you are… Continue reading
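
For reference, a minimal sketch of the kind of mapred-site.xml override involved. The exact property name varies by Hadoop version (mapreduce.jobtracker.split.metainfo.maxsize on older releases, mapreduce.job.split.metainfo.maxsize on newer ones), so treat this as an illustration rather than a drop-in fix:

    <property>
      <!-- Cap on the size of a job's split metadata file; -1 disables the check entirely. -->
      <name>mapreduce.job.split.metainfo.maxsize</name>
      <value>-1</value>
    </property>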


Now using S4CMD

S3CMD’s distinct lack of multi-threading led me to hunt for alternatives. I tried many, including s3-multipart (great when I used it), s3funnel, and s3cp, but none quite fit the bill of supporting the key features I found important: 1) listing/downloading/uploading of files and “folders”, 2) multi-threading, 3) synchronization handled so… Continue reading
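
By way of illustration, a rough sketch of the s4cmd invocations that cover those use cases. The bucket and path names are made up, and exact flags may differ across s4cmd versions:

    # List keys under a prefix (hypothetical bucket/prefix names)
    s4cmd ls s3://my-log-bucket/2013/

    # Recursively download a "folder"; s4cmd spreads transfers across worker threads
    s4cmd get -r s3://my-log-bucket/2013/ ./logs/

    # Sync a local directory up to S3, copying only what has changed
    s4cmd sync ./logs/ s3://my-log-bucket/backup/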