On the interpretation of results from the NIST statistical test suite
Authors | |
---|---|
Year of publication | 2015 |
Type | Article in Periodical |
Magazine / Source | Romanian Journal of Information Science and Technology |
MU Faculty or unit | |
Citation | |
Field | Informatics |
Keywords | Hypothesis testing; NIST STS; Statistical randomness testing |
Description | The NIST Statistical Test Suite (NIST STS) is an important suite for randomness analysis, often used for formal certifications or approvals. The NIST STS documentation gives some guidance on how to interpret its results, but the interpretation is not clear enough or relies only on approximate values. Moreover, NIST considers data to be random only if all tests pass, yet even truly random data has a high probability (80%) of failing at least one NIST STS test. If data fail some tests, the NIST STS recommends analysing different samples. We analysed 819,200 sequences (100 GB of data) produced by a physical source of randomness (a quantum random number generator) in order to interpret results without analysing any additional samples. The results indicate that data can still be considered random at the significance level α = 0.01 if they fail fewer than 7 NIST STS tests, fewer than 7 tests of uniformity of p-values (100 sequences), or fewer than 10 tests of proportion of passing sequences. We have also defined a more accurate interval of acceptable proportions, computed with a new constant (2.6 instead of 3), for which 1000 sequences can be considered random if they fail fewer than 7 tests of proportion (a worked computation of this interval is sketched below the table). |
Related projects | |
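
The abstract's "interval of acceptable proportions" refers to the standard NIST SP 800-22 rule, which accepts the proportion of passing sequences if it falls within p̂ ± c·sqrt(p̂(1 − p̂)/m), where p̂ = 1 − α and m is the number of sequences; NIST uses c = 3, while the paper proposes c = 2.6. The following minimal Python sketch only evaluates that textbook formula for α = 0.01 and m = 1000; the function name and printed bounds are illustrative, not taken from the paper.

```python
from math import sqrt


def proportion_interval(alpha: float, m: int, c: float) -> tuple[float, float]:
    """Acceptable range for the proportion of sequences passing one NIST STS test,
    following the SP 800-22 rule p_hat +/- c * sqrt(p_hat * (1 - p_hat) / m),
    where p_hat = 1 - alpha and m is the number of tested sequences."""
    p_hat = 1.0 - alpha
    half_width = c * sqrt(p_hat * (1.0 - p_hat) / m)
    return p_hat - half_width, p_hat + half_width


if __name__ == "__main__":
    alpha, m = 0.01, 1000
    # NIST's documented constant (3) versus the tighter constant proposed in the paper (2.6).
    for c in (3.0, 2.6):
        lo, hi = proportion_interval(alpha, m, c)
        print(f"c = {c}: acceptable proportion in [{lo:.5f}, {hi:.5f}]")
    # c = 3.0 -> roughly [0.98056, 0.99944]
    # c = 2.6 -> roughly [0.98182, 0.99818]
```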