Parameter discovery tools comparison

Thu, Jul 1, 2021 4-minute read

Some people asked me to publish a comparison between x8 and the other major parameter discovery tools, Arjun and Param Miner, so here it is!

Parameter discovery tools help find parameters that may be vulnerable or reveal hidden features. In this post, I check the speed and accuracy of these tools. For the tests, I used a wordlist with 26k parameters. If you don’t have time to read the whole post, you can go directly to the summary at the end of the page.


x8 v2.0.0

I used the --disable-custom-parameters flag because none of the other tested tools have this functionality.

arjun v2.1.3

I used the -c 256 flag because the initial number of parameters per request is too large: some pages ignore the rest of the parameters or throw errors. I also modified the tool, because it otherwise stops on an HTTP 400 response.
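For reference, the invocations look roughly like this. The URL and wordlist path are placeholders, and flag names may differ between versions, so check each tool's help output:

```shell
# x8: custom parameters disabled so the comparison is fair to the other tools
x8 -u "https://example.com/" -w params.txt --disable-custom-parameters

# arjun: cap the chunk size at 256 parameters per request
arjun -u "https://example.com/" -w params.txt -c 256
```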

param miner v1.28

I used the following settings:
- disable origin cachebuster
- disable basic wordlist
- force bucketsize = 256 (the automatic bucket size sometimes works very badly and sends only 6-12 parameters per request)
- disable response (when enabled, this setting makes the tool search for parameters in every response; I don’t like it because it can increase the number of requests several times over)
- use custom wordlist
Default request:

Host: host
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36
Accept-Charset: utf-8, iso-8859-1;q=0.5, *;q=0.1
Accept-Language: en-US, *;q=0.5
Accept: */*
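To reproduce this baseline request by hand (for example, to compare raw responses), a curl equivalent might look like this; the host is a placeholder:

```shell
curl -s "https://host/" \
  -H "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36" \
  -H "Accept-Charset: utf-8, iso-8859-1;q=0.5, *;q=0.1" \
  -H "Accept-Language: en-US, *;q=0.5" \
  -H "Accept: */*"
```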


To perform the comparison, I chose my test domain and a few popular domains, then ran a crawler on them and selected the most interesting paths.
Some info about the custom targets: they contain a lot of different parameters that, I believe, cover most real-life cases.



The next tables show the statistics across 10 endpoints.

Parameters found by each tool and how many requests it took

x8

| endpoint | requests | parameters |
| --- | --- | --- |
| 1 | 231 | admin, copy, email, facebook, test, z |
| 2 \* | 104 | email, role, tag, username |
| 3 | 217 | 37 parameters: ad, client, complete, cr, dnr, domains, gc, gcs, gl, gll, gm, gpc, gr, h, host, hq, imgtype, imgurl, interests, lr, lsf, pws, q, query, rcu, rls, rlz, sab, si, sie, sky, sz, tbm, tnm, ur, v, w |
| 4 | 147 | rs, sqp\_search |
| 5 | 156 | as\_epq, as\_eq, as\_filetype, as\_nhi, as\_nlo, as\_oq, as\_q, as\_sitesearch, cr, q, query, tbm |
| 6 | 165 | from, tag |
| 7 | 148 | page, q, return\_to, utm\_campaign, utm\_medium, utm\_source, utm\_term |
| 8 | 147 | rs, sqp |
| 9 | 130 | auth |
| 10 | 156 | auth, bp, cbr, cos, pbj, spf |

arjun

| endpoint | requests | parameters |
| --- | --- | --- |
| 1 | 167 | z, facebook, test, email |
| 2 \* | infinite loop of requests | — |
| 3 | 119 | tbm |
| 4 | 135 | rs, sqp\_search |
| 5 | 124 | tbm |
| 6 | 103 | — |
| 7 | 133 | page, id |
| 8 | 133 | rs, sqp |
| 9 | 106 | auth |
| 10 | 105 | — |

param miner

| endpoint | requests | parameters |
| --- | --- | --- |
| 1 | 372 | copy, test, z |
| 2 \* | 132 | email, tag, username |
| 3 | 1178 | ad, client, complete, cr, domains, tbm, tnm, lr, pws, rcu, rlz, tnm, ur |
| 4 | 255 | rs, sqp\_search |
| 5 \*\* | 429 | as\_epq, as\_eq, as\_filetype, as\_nhi, as\_nlo, as\_oq, as\_q, as\_sitesearch, cr, q, query, tbm |
| 6 | 294 | from, tag |
| 7 | 132 | — |
| 8 | 253 | rs, sqp |
| 9 | 179 | auth |
| 10 | 292 | auth, bp |

\* — parameters are sent via a JSON body, 512 parameters per request.
\*\* — the as\_ parameters were added to the wordlist manually because I disabled searching for words in the response.

Average number of requests needed for 1 parameter

| tool | requests per parameter |
| --- | --- |
| x8 | 54 |
| arjun | 85 |
| param miner | 118 |

I removed one endpoint from the count in this and the next table because 45% of all the parameters were found there.
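The requests-per-parameter metric is just the total number of requests divided by the total number of parameters found. A minimal sketch with made-up sample numbers (not the measured data):

```shell
#!/bin/sh
# Made-up sample: requests sent and parameters found per endpoint
requests="231 104 147"
params="6 4 2"

total_r=0
for r in $requests; do total_r=$((total_r + r)); done

total_p=0
for p in $params; do total_p=$((total_p + p)); done

# Integer average: 482 / 12
echo $((total_r / total_p))   # → 40
```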

Missing parameters

| tool | count | % |
| --- | --- | --- |
| x8 | 1 | 2% |
| arjun | 29 | 70% |
| param miner | 16 | 36% |


The next table shows the speed of each tool. The target ran on localhost. I made the comparison on my laptop with:
OS: Linux 5.12.9-arch1-1
CPU: Intel i3-7020U

| tool | size=10 (300 KB) | size=25 (750 KB) | size=50 (1500 KB) | speed |
| --- | --- | --- | --- | --- |
| x8 \*\*\* | 10.144s | 22.232s | 44.784s | 1 |
| x8, 7 threads | 9.360s | 22.085s | 44.288s | |
| arjun | 14.174s | 28.956s | 52.904s | 0.8 |
| arjun, 7 threads | 13.161s | 28.821s | 53.768s | |
| param miner | 10s | 37s | 61s | 0.71 |

\*\*\* — 256 parameters per request were forced, as in the other tools.
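A simple way to take this kind of measurement yourself is the shell's time builtin against a local target (the URL and wordlist path here are placeholders):

```shell
# Wall-clock time of a single discovery run against a local test server
time x8 -u "http://127.0.0.1:8000/" -w params.txt
```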


Summary

| # | tool | requests per parameter | accuracy | speed |
| --- | --- | --- | --- | --- |
| 1 | x8 | 54 | 98% | 1 |
| 2 | param miner | 118 | 64% | 0.71 |
| 3 | arjun | 85 | 30% | 0.8 |

Final thoughts & conclusion

Some of these stats may be quite inaccurate due to the small number of test endpoints and the inability to know the exact number of parameters, but they still give a rough picture. Most of the time, param miner and arjun fail to detect parameters with a varying number of reflections and other difficult cases.

Feel free to suggest other tools and endpoints via Telegram or Twitter. If you believe you found a mistake in the data, first compare your tool versions with the tested versions and run the tool at least 3-4 times, because results can differ from run to run. If the versions match and most runs still give different results, write to me.