Describe the feature request or bug or other
I am trying to run a rather large wordlist (500 MB+) and the Python process ends up consuming all of my memory (32 GB+).
It appears the code holds all results in memory until the run completes. A suggestion would be to add some form of chunking, based either on the size of the file or simply on a fixed number of lines per "task" (see the sketch below).
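For illustration, here is a minimal sketch of the chunking idea, assuming a hypothetical process_chunk callback that stands in for whatever per-name brute-force routine dnsrecon runs; the function and parameter names are made up for this example and are not part of the tool.

```python
# Sketch only: read the wordlist in fixed-size batches so that only one
# batch of candidate names is ever held in memory at a time.
from itertools import islice
from typing import Iterator, List


def read_wordlist_in_chunks(path: str, chunk_size: int = 10_000) -> Iterator[List[str]]:
    """Yield lists of at most `chunk_size` stripped, non-empty lines from `path`."""
    with open(path, "r", encoding="utf-8", errors="ignore") as handle:
        while True:
            chunk = list(islice(handle, chunk_size))
            if not chunk:
                break
            yield [line.strip() for line in chunk if line.strip()]


def brute_force(domain: str, wordlist_path: str, process_chunk) -> None:
    """Resolve one chunk at a time; `process_chunk` is a hypothetical callback."""
    for chunk in read_wordlist_in_chunks(wordlist_path):
        results = process_chunk(domain, chunk)  # resolve only this batch
        # Flush `results` to the output CSV here instead of accumulating
        # everything until the end of the run.
        del results
```

With something like this, peak memory is bounded by the chunk size rather than by the size of the wordlist or the total number of results.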
To Reproduce
Steps to reproduce the behaviour:
1. Run the tool like this: 'python dnsrecon.py -d domainwhatever.com -D first_level_subdomains_wordlist.txt -t brt -c output.csv'
2. Watch memory usage climb until all available RAM is consumed
Expected behaviour
A more efficient way of handling bigger wordlists that does not consume that much RAM.
System Information (System that tool is running on):
OS: Linux in WSL (Also tested on Ubuntu)