scaff_reads: Segmentation fault #17
Comments
Hi, thanks for the email. When you run scaff_reads on /lustre/scratch117/sciops/team117/hpag/zn1/project/bird/hummingbird/QC/10x/bCalAnn1_S1_L001_R1_001.fastq.gz, this will produce a temporary directory which contains all the files. Could you do ls -lrt and send me the file list? Best regards, Zemin Ning
Oops, I deleted them all.
I think scaff_reads can only handle a certain volume of reads, because it works after I split the giant fastq file (a 6 Gb gzipped file) into ~15 smaller files.
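A minimal sketch of that kind of split, assuming GNU split; the chunk size (a multiple of 4 lines, so FASTQ records stay intact) and file names are illustrative:
# Split R1 and R2 with the same line count so read pairs stay in sync across chunks.
# Adjust -l (here 40000000 lines = 10 M reads) to control the number of chunks.
zcat MySample_S1_L001_R1_001.fastq.gz | split -l 40000000 -d --additional-suffix=.fastq - R1_part_
zcat MySample_S1_L001_R2_001.fastq.gz | split -l 40000000 -d --additional-suffix=.fastq - R2_part_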
You can run scaff10x directly rather than running scaff_reads to get the two read files; basically you don't need them. It saves disk space when you use "-data file.dat".
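A minimal sketch of that direct invocation, assuming the -nodes and -data flags described in the Scaff10X README; draft-assembly.fasta and output_scaffolds.fasta are placeholder names:
# Scaffold straight from input.dat; no intermediate genome-BC_*.fastq.gz files are written.
Scaff10X/src/scaff10x -nodes 30 -data input.dat draft-assembly.fasta output_scaffolds.fasta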
Hi there,
scaff_reads only produces genome-BC_1.fastq.gz, not genome-BC_2.fastq.gz. I have listed the relevant information below; please help fix it.
Thanks a lot
# command line
Scaff10X/src/scaff_reads -nodes 30 input.dat genome-BC_1.fastq.gz genome-BC_2.fastq.gz
# error information
58211 Segmentation fault Scaff10X/src/scaff-bin/scaff_BC-reads-2 MySample_S1_L001_R1_001.fastq.name MySample_S1_L001_R2_001.fastq MySample_S1_L001_R2_001.fastq.RC2 > try.out
# cpus & mem
#SBATCH --cpus-per-task=30
#SBATCH --mem=500G
# input.dat
q1=MySample_S1_L001_R1_001.fastq (base size: 150 Gb)
q2=MySample_S1_L001_R2_001.fastq (base size: 150 Gb)
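One hedged diagnostic, assuming (my guess, not confirmed) that scaff_BC-reads-2 could crash if R1 and R2 fall out of sync: compare the record counts of the two files.
# Both commands should report the same number of reads (line count / 4).
echo "R1: $(( $(wc -l < MySample_S1_L001_R1_001.fastq) / 4 )) reads"
echo "R2: $(( $(wc -l < MySample_S1_L001_R2_001.fastq) / 4 )) reads"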
# fastq
@CL100073098L1C001R001_7
CCCAATGGGACAATGGCAGGGCTGCCTATGGGGGAACCGGCATTGCTGTGAGGGTCGGGGGGACTATTGTATCTGTAAAGGATCAGCCATGGCCAGAAGTAGGTTTCTGAGCTGAGCGGTGACAGACTGTGCCCTTTTCCTGGCAGGAGG
+
@:GFFDFFFFF9FFFCFGFFDFGEFFE@FFFFFDEF@FF:F?DFGFF@FGFFFEDFEFFFGF;=FFFFGFFGGFFEBFCGE5F>BCFFBFFFFFFCF1BFAGFCDEFEF7FEFF,FGFADF3DBD=FFC84FFFGBFF7GF:DFDFF=EF
@CL100073098L1C001R001_9
CTGCGTTTCGCGGCATGCTTTCTAGAAGCTTAAGTTGTCTGTTTTTCCACCCTCCAAATTGTCTGACCACTTGTTGATAGTAGCAATTCCATTTTAATACCTTATGTCATAAGTATTTTAAGCAACCAAAAGATTCCTTTATTTTTTGCA
+
FFFGFFFFFGGB;FFGGFEGEGEEGGCFGEEE=GFFGEGEGFFGGCEGFGDFGBFFBBGFEGDFEFEGBEFBBGGG:GGBDFDFDGGGF?ECF@F@GEAEAEEEEEGF>GDFDEEEECFF,GFFFE1FGGBEGCEG@EAC?DCGEAEB5@