This repository is based on DallasMorningNews/s3replace
Sometimes we have to replace the same thing in lots of files in an S3 bucket - for example, when the paywall code changes or when we switch commenting vendors. This repo is our automated solution.
It uses the AWS API to roll through all of the objects in a bucket (there's a sketch of this loop after the list):

- It filters the objects to search using a regular expression on the key and downloads any object whose key matches.
- For each downloaded object, it uses another regular expression to check whether the content matches the requirements.
- It then iterates over an array of regular expressions to find the relevant code to replace.
- If the object's content is a match:
  - In standard mode you'll be given a preview and asked for confirmation before anything is changed.
  - In force mode the file is overwritten without confirmation.
  - In dry-run mode only the preview is shown.
- It replaces the code, copying metadata such as `ContentType`, `ContentDisposition`, and other key fields.
- A backup of the file is saved locally, just in case.
- A log entry is written to a log file for each match or change.
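A minimal sketch of that loop using boto3, with a hypothetical bucket name, patterns, and replacement text; the real implementation, with previews, confirmation modes, backups, and logging, lives in `s3replace/main.py`:

```python
import re
import boto3

BUCKET = "my-bucket"                              # hypothetical bucket
key_pattern = re.compile(r"\.html$")              # which keys to consider
needle_pattern = re.compile(r"old-paywall\.js")   # content to find
replace_with = "new-paywall.js"                   # content to write instead

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if not key_pattern.search(key):
            continue                              # skipped without downloading
        resp = s3.get_object(Bucket=BUCKET, Key=key)
        body = resp["Body"].read().decode("utf-8")
        if not needle_pattern.search(body):
            continue                              # no match in the content
        s3.put_object(                            # replace, keeping metadata
            Bucket=BUCKET,
            Key=key,
            Body=needle_pattern.sub(replace_with, body).encode("utf-8"),
            ContentType=resp.get("ContentType", "binary/octet-stream"),
        )
```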
- Python 3: on macOS, `brew install python`; on Debian it is installed by default
- pipenv: on macOS, `brew install pipenv`; on Debian, `sudo apt install pipenv`
- Clone this repo
- Install requirements using `pipenv`.
In `s3replace/main.py` (an illustrative snippet follows this list):

- Update the `needle_pattern` at the top. This pattern will be used by `re.search` to find matching documents, and it is the content that gets replaced using `re.sub`.
- Set the `needle_pattern_list` array; each pair consists of a pattern to find and the text to replace it with.
- Set `replace_with` at the top of the file to the text you want to replace the `needle_pattern` with.
- Update the `key_pattern` variable to match the keys you want to run `needle_pattern` against; the more specific this is, the better, since keys that don't match are skipped without being downloaded, and downloading is the slowest part of the process.
- Set `needle_pattern_max_count`, the maximum number of times `needle_pattern` is expected in a file; if a file contains more matches than this, it is skipped and a copy of it is saved in `backups/too_many_matches`.
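As an illustration, those settings might look something like this at the top of `main.py`; the patterns and replacement text here are made up, and the exact variable shapes should be checked against the actual file:

```python
# Hypothetical example values; check main.py for the real variable shapes.

# Keys to run the needle patterns against; the more specific, the faster.
key_pattern = r"^aserving/.*\.html$"

# Text that replaces needle_pattern wherever it matches.
replace_with = 'src="https://cdn.newvendor.com/comments.js"'

# Pairs of (pattern to find, text to replace it with).
needle_pattern_list = [
    (r'src="https://cdn\.oldvendor\.com/comments\.js"',
     'src="https://cdn.newvendor.com/comments.js"'),
]

# If a pattern matches more often than this, the file is skipped and a
# copy of it is saved in backups/too_many_matches.
needle_pattern_max_count = 5
```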
The script automatically creates a backup of the original content of the objects/files in `backups/`. If a backup copy already exists, it won't be overwritten.
The script also writes log files to the `logs/` directory.
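A rough sketch of that backup-and-log behaviour, with hypothetical helper and file names (the script's own function names may differ):

```python
import os
import logging

def backup_object(key: str, body: bytes, backup_root: str = "backups") -> None:
    """Save a local copy of the object, but never overwrite an existing one."""
    path = os.path.join(backup_root, key)
    if os.path.exists(path):
        return                                    # keep the earlier backup
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(body)

os.makedirs("logs", exist_ok=True)
logging.basicConfig(filename="logs/s3replace.log", level=logging.INFO,
                    format="%(message)s")
logging.info("Replace object: %s", "aserving/4/0/index.html")
```

Note the `Replace object:` log format, which the log-processing commands further down rely on.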
This runs as a command-line tool. See all the options by running `python s3replace --help`:
```
python s3replace --help

Find and replace for an S3 bucket.

Usage:
    s3replace <bucket> [--dry-run|--force-replace] --access-key-id=<key> --secret-access-key=<key>
    s3replace -h | --help
    s3replace --version

Options:
    --help                     Show this screen.
    --dry-run                  Do not replace.
    --force-replace            Replace without confirmation.
    --version                  Show version.
    --access-key-id=<key>      AWS access key ID
    --secret-access-key=<key>  AWS secret access key
```

Basic usage only requires a bucket name and credentials:
```
python s3replace <bucket> --dry-run --access-key-id=<your_id> --secret-access-key=<your_key>
```

You can pass your AWS credentials using the flags, as above, or you can provide them using any of the other methods supported by boto3.
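For example, the flags map onto an explicit boto3 session, while the implicit fallbacks cover environment variables and shared config files (the key values below are placeholders):

```python
import boto3

# Explicit credentials, mirroring --access-key-id / --secret-access-key:
session = boto3.session.Session(
    aws_access_key_id="AKIA...",          # placeholder
    aws_secret_access_key="...",          # placeholder
)

# Implicit credentials: boto3 falls back to the AWS_ACCESS_KEY_ID /
# AWS_SECRET_ACCESS_KEY environment variables, ~/.aws/credentials
# profiles, or an attached IAM role.
default_session = boto3.session.Session()
s3 = default_session.client("s3")
```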
List the backed-up files, deepest paths first:

```
find backups/ -type f -printf '%d\t%P\n' | sort -r -nk1 | cut -f2-
```

Upload test content:
```
aws --profile plexop s3 cp /mnt/c/xampp/htdocs/Creative/aserving-4/0-szs-test/ "s3://static-plexop/aserving/4/0/" --recursive
```

Create a list of the changed files/pages from the logs:
```
cat logs/*log | grep -Po '^Replace object:.*$' | sort -u | sed 's#Replace object: #https://static-plexop.s3.amazonaws.com/#' > logs/pages.log
```

Create a list of the changed files/pages from the backup directory:
```
cd backups
find aserving/ -type f | sort -u | sed 's#^#https://static-plexop.s3.amazonaws.com/#' > pages.log
```

Check if there are any differences between the two lists:
```
diff -c backups/pages.log logs/pages.log
```

Create a backup of the collected data:
```
tar -zcvf backup.tgz backups logs
```
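If you'd rather run the same verification in Python instead of shell, a rough equivalent, assuming the same log format and bucket URL prefix, could be:

```python
import glob
import os

PREFIX = "https://static-plexop.s3.amazonaws.com/"
MARKER = "Replace object: "

# Collect changed keys from the log files.
from_logs = set()
for log_path in glob.glob("logs/*log"):
    with open(log_path) as f:
        for line in f:
            if line.startswith(MARKER):
                from_logs.add(PREFIX + line[len(MARKER):].strip())

# Collect changed keys from the backup directory.
from_backups = set()
for root, _dirs, files in os.walk("backups/aserving"):
    for name in files:
        rel = os.path.relpath(os.path.join(root, name), "backups")
        from_backups.add(PREFIX + rel.replace(os.sep, "/"))

# An empty symmetric difference means the two lists agree.
print(sorted(from_logs ^ from_backups))
```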