The aim is to set up three servers that will mimic (and eventually become) instances running on the South African infrastructure, the Mali ACE infrastructure and the Uganda ACE infrastructure.
The idea is to make use of three GA4GH standards: Data Connect, DRS and WES. The plan is to later incorporate Passports into the design, but for now firewall rules allow access only between the nodes.
The use case is as follows. We make use of the 1000 Genomes data. We selected the ACE2 region from the full-genome CRAMs and indexed the result to create a smaller, more workable version of the data. The data was divided into three batches, and each DRS server hosts a specific batch at its corresponding instance. The Data Connect server contains the CRAM/CRAI access details (DRS ids) from all the DRS servers. A user queries the Data Connect server, selects the DRS CRAM objects based on the query, and submits those to a WES endpoint. Queries on Data Connect can be done using sample id, population group, super population group and sex. The WES endpoint computes some stats on the CRAM files and generates a combined MultiQC report.
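A minimal Python sketch of this flow is shown below. The host URLs, the workflow file name and its type are assumptions for illustration; the `/search` and `/runs` calls follow the GA4GH Data Connect and WES specifications, the `genome_ilifu` table is the one described later in this document, and pagination of Data Connect results is omitted.

```python
# Sketch of the use case: query Data Connect, collect DRS ids, submit to WES.
# Host names, the workflow URL and its type are assumptions for illustration.
import json
import requests

DATA_CONNECT_URL = "http://localhost:8089"       # assumed Data Connect node
WES_URL = "http://localhost:6000/ga4gh/wes/v1"   # assumed WES node

# 1. Query Data Connect (GA4GH /search) for one super population group.
search = {
    "query": "SELECT sample_id, cram_drs_id, crai_drs_id "
             "FROM genome_ilifu WHERE super_population_id = ?",
    "parameters": ["AFR"],
}
rows = requests.post(f"{DATA_CONNECT_URL}/search", json=search).json()["data"]

# 2. Collect the DRS ids of the selected CRAM/CRAI objects.
cram_ids = [row["cram_drs_id"] for row in rows]
crai_ids = [row["crai_drs_id"] for row in rows]

# 3. Submit a WES run that computes stats on the CRAMs and builds a MultiQC report.
run = requests.post(
    f"{WES_URL}/runs",
    data={
        "workflow_url": "cram-stats-multiqc.nf",   # hypothetical workflow
        "workflow_type": "NFL",                    # assumed Nextflow workflow type
        "workflow_type_version": "21.04.0",
        "workflow_params": json.dumps({"crams": cram_ids, "crais": crai_ids}),
    },
).json()
print(run["run_id"])
```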
The setup was followed from here.
Additionally, a database was created for our purposes and populated:
- Create the following table:
CREATE TABLE genome_ilifu (
sample_id VARCHAR(36) PRIMARY KEY,
population_id VARCHAR(36) NOT NULL,
super_population_id VARCHAR(36) NOT NULL,
sex VARCHAR(36) NOT NULL,
cram_drs_id VARCHAR(10485760),
crai_drs_id VARCHAR(10485760)
);
- Grant all rights to our user 'dataconnecttrino':
GRANT ALL PRIVILEGES ON TABLE genome_ilifu TO dataconnecttrino;
- Then run the script genome_ilifu.sql or copy and run it against PostgreSQL.
The cram_drs_id and crai_drs_id values were calculated as the md5sum of the string version of the full file path. The same value was added as the DRS id in the DRS database.
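A minimal sketch of that derivation, assuming the path string is hashed directly (the file paths below are hypothetical):

```python
# Derive a DRS id as the md5 hex digest of the full file path string
# (not of the file contents). The paths are hypothetical.
import hashlib

def drs_id_for(path: str) -> str:
    return hashlib.md5(path.encode("utf-8")).hexdigest()

cram_drs_id = drs_id_for("/data/ace2/HG00096.ACE2.cram")
crai_drs_id = drs_id_for("/data/ace2/HG00096.ACE2.cram.crai")
```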
The setup was followed from here and run from source.
DBs, configs and scripts for each node are in the resources folder.
The Python notebook, populate-db.ipynb, populates the SQLite database with test data. As previously mentioned, the hashlib md5 function is used to create the checksum of each file's full path, which is used as the identifier for the DRS object. The DRS object ID, file path, and other information are uploaded to the server database using an HTTP POST request.
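A minimal sketch of that registration step, assuming a POST endpoint on the DRS server; the registration route, payload fields and file path are illustrative assumptions, and the notebook defines the exact schema the server expects.

```python
# Register a DRS object by POSTing its metadata to the node's DRS server.
# The endpoint path, payload fields and file path are assumptions.
import hashlib
import requests

DRS_URL = "http://localhost:5000"          # assumed DRS server for this node
path = "/data/ace2/HG00096.ACE2.cram"      # hypothetical file path

payload = {
    "id": hashlib.md5(path.encode("utf-8")).hexdigest(),  # md5 of the path string
    "name": "HG00096.ACE2.cram",
    "access_methods": [{"type": "file", "access_url": {"url": f"file://{path}"}}],
}
requests.post(f"{DRS_URL}/ga4gh/drs/v1/objects", json=payload)
```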
The setup was followed from here and run from source.
DBs, configs and scripts for each node are in the resources folder.
A Jupyter notebook orchestrator implementing the two use cases is available here.