Description
- sqlmap overwrites the previously dumped table if the same scan is started again; a switch is needed that lets the user decide whether to overwrite or not.
- If I dump a table, for example one named "test" with 70,000 entries, and then dump the same table again or use --dump-all, sqlmap gets stuck on this table for a long time. Already fully dumped tables should be skipped, and if a table is not fully dumped, the dump should resume from the last entry.
- If the OS running Python restarts, shuts down, or hits a Blue Screen of Death, imagine that sqlmap has been dumping a table with 1,000,000 entries for the last 5 days and has dumped most of it but not all. When sqlmap is started again after such an issue it cannot resume the scan and does everything from the beginning, even though session.sqlite seems to already contain the dumped data. So sqlmap should flush results to the dump file every 10 minutes or so, not only to session.sqlite, instead of waiting for the full dump before writing the output file (see the sketch after this list).
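A minimal sketch of what such periodic flushing could look like. This is illustrative only, not sqlmap's actual code: the `PartialDumpWriter` class, the `flush_interval` parameter, and the file names are made up for the example.

```python
import csv
import os
import time

class PartialDumpWriter:
    """Hypothetical helper: append fetched rows to a CSV file at a fixed
    interval so a crash loses at most flush_interval seconds of work."""

    def __init__(self, path, columns, flush_interval=600):
        self.path = path
        self.flush_interval = flush_interval  # 600 s = 10 minutes
        self._pending = []
        self._last_flush = time.time()
        # Write the header only if the dump file does not exist yet
        if not os.path.exists(path):
            with open(path, "w", newline="") as f:
                csv.writer(f).writerow(columns)

    def add_row(self, row):
        self._pending.append(row)
        if time.time() - self._last_flush >= self.flush_interval:
            self.flush()

    def flush(self):
        if self._pending:
            with open(self.path, "a", newline="") as f:
                csv.writer(f).writerows(self._pending)
            self._pending = []
        self._last_flush = time.time()

# Usage sketch: rows retrieved one by one are persisted periodically.
# writer = PartialDumpWriter("test.csv", ["id", "name"], flush_interval=600)
# for row in fetch_rows_somehow():   # placeholder for the actual retrieval loop
#     writer.add_row(row)
# writer.flush()                     # final flush when the dump completes
```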
Activity
stamparm commented on Sep 30, 2014
and if a table not full dumped then start dump from last entry
-> sqlmap resumes queries stored in the session file to minimize the number of repeated requests. We can't use "resume from last entry" as sqlmap sees everything as query<->reply.

I can feel your frustration, but writing a partial table dump, which would be totally different from the final formatted and filtered output, would just create new issues.
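Conceptually, the session file acts as a cache keyed by the extraction query, so repeated runs skip requests whose replies are already stored. The sketch below illustrates the idea only; it is not sqlmap's actual session handling, and the `SessionCache` class, table name, and column names are invented for the example.

```python
import sqlite3

class SessionCache:
    """Hypothetical query<->reply cache: on a re-run, cached replies are
    returned without sending the request to the target again."""

    def __init__(self, path="session.sqlite"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS storage (query TEXT PRIMARY KEY, reply TEXT)"
        )

    def run(self, query, send_request):
        row = self.conn.execute(
            "SELECT reply FROM storage WHERE query = ?", (query,)
        ).fetchone()
        if row is not None:          # already answered in a previous run
            return row[0]
        reply = send_request(query)  # only new queries reach the target
        self.conn.execute("INSERT INTO storage VALUES (?, ?)", (query, reply))
        self.conn.commit()
        return reply
```

Because resumption happens at the query/reply level rather than at the level of the finished dump file, a fully formatted partial table dump would have to be reconstructed separately from these cached replies.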
mukareste commented on Sep 30, 2014
I am trying to figure out a legitimate reason to dump 70K entries. Is this a requirement of some sort of penetration test?
Changed the title from "Extensions" to "Dump and CSV file"
ghost commented on Jun 4, 2016
Performing a backup of old dump file (Issue #841)
stamparm commented on Jun 5, 2016
@lukapusic with the latest commit, old table dumps are backed up. For example, after a couple of runs: