
uploading big files and minor issues in the downloaded results files #3

Open
rosherbal opened this issue Feb 15, 2019 · 1 comment

rosherbal commented Feb 15, 2019

Dear Mutfunc Team,

First of all, I would like to congratulate you on the nice paper and the web server. I have been working with it this week and it works well. However, I have encountered some issues that I hope you can fix:

  • Uploading files. My first problem came when I tried to upload a file containing around 317,500 protein_id/variant pairs in the required format (with the Human database selected). The server gave me an error message every time I tried. I didn't give up, so I tried a smaller dataset (around 11,996 pairs). At first I got the same server error again, but on the third attempt it worked.

  • Downloaded result files. Once I downloaded the results and started to parse the files, I also found some minor issues:
    - TFBS.tab file: in the commented header section, the column numbering is off from the 14th position onwards because position 14 appears twice.
    - linear_motifs.tab file: it seems that the 17th column (pattern) has been translocated to the 14th position (ELM accession), so all columns from the 14th to the 17th are shifted by one position (see the sketch after this list).

    - start_stop.tab file: the header line is missing.
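
In case it helps other users, here is a quick client-side workaround for the shifted columns. It is only a sketch: it assumes the data rows are plain tab-separated text after the "#" comment block, with the positions as described above, and the file names are placeholders.

```python
# Reorder columns 14-17 (1-based) of linear_motifs.tab so that the stray
# "pattern" column sitting at position 14 goes back to position 17.
# Assumes a "#"-commented description block followed by tab-separated rows;
# file names are placeholders.
with open("linear_motifs.tab") as fin, \
     open("linear_motifs.fixed.tab", "w") as fout:
    for line in fin:
        if line.startswith("#"):
            fout.write(line)                   # keep the comment block as-is
            continue
        fields = line.rstrip("\n").split("\t")
        if len(fields) >= 17:
            block = fields[13:17]              # 0-based slice of columns 14-17
            fields[13:17] = block[1:] + block[:1]
        fout.write("\t".join(fields) + "\n")
```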

So far, these are all my issues. I hope this feedback helps you improve this great tool.

All the best,

Rosa

@mgalardini mgalardini self-assigned this Feb 15, 2019
mgalardini (Collaborator) commented

Hi Rosa,

thanks for reporting these errors and inconsistencies. The exported files should be an easy fix, but I believe that allowing a large number of queries at the same time might be difficult to achieve. Maybe we can implement a warning or an error message in those cases. We are also working on releasing the raw data so that large queries can be run with no problems. I'll keep you posted.
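
In the meantime, splitting the input into smaller batches before uploading should work as a stopgap. A rough sketch below; the input file name and the batch size are just placeholders, not fixed server limits.

```python
# Split a large protein_id/variant list into smaller files for separate uploads.
# "variants.txt" and the 10,000-line batch size are placeholders.
BATCH_SIZE = 10_000

with open("variants.txt") as fin:
    lines = [line for line in fin if line.strip()]  # drop empty lines

for i in range(0, len(lines), BATCH_SIZE):
    batch_no = i // BATCH_SIZE + 1
    with open(f"variants_batch_{batch_no:03d}.txt", "w") as fout:
        fout.writelines(lines[i:i + BATCH_SIZE])
```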

Best,
M
