Autumn
2024-06-04
Tips
During a scan, if a node goes offline (Docker will restart it automatically), the program automatically creates a task so the node can continue running its unfinished work.
Customize the task name
One target per line; targets containing a wildcard are best placed at the bottom (to prevent inconsistent task allocation).
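The ordering advice above can be sketched as a small helper that pushes wildcard targets to the end of the list (`order_targets` is a hypothetical name, not part of the tool):

```python
def order_targets(raw: str) -> list[str]:
    """Split one-target-per-line input and move wildcard targets to the end."""
    targets = [line.strip() for line in raw.splitlines() if line.strip()]
    plain = [t for t in targets if "*" not in t]
    wild = [t for t in targets if "*" in t]
    return plain + wild

print(order_targets("*.example.com\napp.example.com\napi.example.com"))
# → ['app.example.com', 'api.example.com', '*.example.com']
```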
You can select which nodes run the task. If you select All, any newly connected node automatically joins the task; otherwise the scan runs only on the selected nodes.
If a scheduled task is selected, the system runs the task periodically according to the configured schedule.
When subdomain-deduplication scanning is selected, previously seen subdomains are looked up in the database; any subdomain that already exists is skipped.
When port deduplication is selected, only ports for which the target has no existing assets are scanned during the port scan.
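A minimal sketch of the two deduplication modes, assuming in-memory stand-ins for the database lookups (`history_subdomains` and `asset_ports` are illustrative names, not the tool's actual schema):

```python
def filter_subdomains(found, history_subdomains):
    """Subdomain dedup: skip any subdomain already recorded in history."""
    return [s for s in found if s not in history_subdomains]

def filter_ports(target, ports, asset_ports):
    """Port dedup: scan only ports with no existing asset for this target."""
    known = asset_ports.get(target, set())
    return [p for p in ports if p not in known]

print(filter_subdomains(["a.example.com", "b.example.com"], {"a.example.com"}))
# → ['b.example.com']
print(filter_ports("example.com", [80, 443, 8080], {"example.com": {80}}))
# → [443, 8080]
```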
Refer to https://github.com/projectdiscovery/subfinder
Stateless subdomain brute-forcing; refer to https://github.com/boy-hack/ksubdomain
The subdomain dictionary is the one configured in dictionary management.
Select a port list created in the port-scanning dictionary; the selected ports are scanned and fingerprinted.
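To illustrate how a port-dictionary entry expands into the concrete ports that get scanned, here is a hypothetical parser; the comma-and-range syntax (e.g. `"80,443,8000-8003"`) is an assumption for the sketch, not the tool's documented format:

```python
def expand_ports(spec: str) -> list[int]:
    """Expand a comma-separated port spec with optional ranges into a sorted list."""
    ports = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-", 1)
            ports.update(range(int(lo), int(hi) + 1))
        elif part:
            ports.add(int(part))
    return sorted(ports)

print(expand_ports("80,443,8000-8003"))
# → [80, 443, 8000, 8001, 8002, 8003]
```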
The directory-scanning dictionary is configured in dictionary management.
Discover more URLs from the target
Tips
Sensitive-information scanning consumes more CPU (depending on the number of crawled URLs it may slow the program down; if possible, move to a machine with more CPU). Configure it sensibly, or enable only a small number of sensitive-information rules. If you have a good optimization approach, please contact me.
Sensitive-information scanning depends on URL scanning, so it can only run after URL scanning is enabled.
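The CPU cost comes from running every enabled rule's regex over every crawled response. A minimal sketch of rule-based matching (the two rules here are illustrative examples, not the tool's built-in rule set):

```python
import re

# Hypothetical rule set: each rule is a name plus a compiled regex.
RULES = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
}

def scan_body(url: str, body: str):
    """Return (url, rule_name, match) for every rule that fires on the body."""
    hits = []
    for name, pattern in RULES.items():
        for m in pattern.finditer(body):
            hits.append((url, name, m.group()))
    return hits

print(scan_body("https://example.com/app.js",
                "contact admin@example.com key=AKIAABCDEFGHIJKLMNOP"))
```

Every URL's body is matched against every rule, so cost grows with both the rule count and the crawl size; this is why enabling fewer rules reduces CPU load.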
Refer to https://github.com/tomnomnom/waybackurls
Discover more URLs
ALL: All discovered URLs will be monitored for changes
JS: Only monitor changes in the content of JS URLs
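The two modes can be sketched with a per-URL content hash: a change is reported when the stored hash differs from the current one, and JS mode simply restricts the monitored set to `.js` URLs (the in-memory store here is a hypothetical stand-in for the tool's database):

```python
import hashlib

_last_hash: dict[str, str] = {}  # hypothetical store: URL -> last content hash

def check_change(url: str, body: bytes, js_only: bool = False) -> bool:
    """Return True if the page content changed since the previous check."""
    if js_only and not url.endswith(".js"):
        return False  # JS mode: ignore non-JS URLs
    digest = hashlib.sha256(body).hexdigest()
    previous = _last_hash.get(url)
    _last_hash[url] = digest
    return previous is not None and previous != digest

print(check_change("https://example.com/app.js", b"v1"))  # False (first sighting)
print(check_change("https://example.com/app.js", b"v2"))  # True (content changed)
```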
If URL scanning is enabled, the crawler input is added to the URL scan results. (This consumes more memory and CPU; configure the number of parallel threads according to your system.)
Crawler configuration is in rad.
In the vulnerability-scanning list, the first entry is ALL POC; selecting it runs every POC.
Scheduled tasks are divided into three types:
On the configuration page, you can select the node to run page monitoring.
On the data page, you can add and delete the monitoring URL list.
For the other two types of scheduled tasks, you can view the task progress; the progress is cleared each time the scheduled task starts.