Implementation Notes for the Soft Cosine Measure
Field | Value |
---|---|
Authors | |
Year of publication | 2018 |
Type | Article in Proceedings |
Conference | Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM '18) |
MU Faculty or unit | |
Citation | |
Web | |
DOI | http://dx.doi.org/10.1145/3269206.3269317 |
Keywords | Vector Space Model; computational complexity; similarity measure |
Attached files | |
Description | The standard bag-of-words vector space model (VSM) is efficient and ubiquitous in information retrieval, but it underestimates the similarity of documents that have the same meaning but different terminology. To overcome this limitation, Sidorov et al. proposed the Soft Cosine Measure (SCM), which incorporates term similarity relations. Charlet and Damnati showed that the SCM is highly effective in question answering (QA) systems. However, the orthonormalization algorithm proposed by Sidorov et al. has an impractical time complexity of O(n^4), where n is the size of the vocabulary. In this paper, we prove a tighter lower worst-case time complexity bound of O(n^3). We also present an algorithm for computing the similarity between documents, and we show that its worst-case time complexity is O(1) given realistic conditions. Lastly, we describe the implementation in general-purpose vector databases such as Annoy and Faiss, and in the inverted indices of text search engines such as Apache Lucene and Elasticsearch. Our results enable the deployment of the SCM in real-world information retrieval systems. (An illustrative sketch of the measure follows below the table.) |
Related projects: |
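The abstract above describes the SCM only in prose. For reference, the SCM between two bag-of-words vectors x and y with a term similarity matrix S is commonly written as SCM(x, y) = xᵀSy / (√(xᵀSx) · √(yᵀSy)); taking S equal to the identity matrix recovers the ordinary cosine similarity. Below is a minimal, illustrative NumPy sketch of this dense formula only. The similarity matrix and example vectors are invented for the example, and the sparse per-document algorithm and the Annoy/Faiss/Lucene/Elasticsearch integrations discussed in the paper are not reproduced here.

```python
import numpy as np

def soft_cosine_measure(x, y, S):
    """Soft Cosine Measure between bag-of-words vectors x and y.

    S is an n-by-n term similarity matrix with ones on the diagonal;
    taking S = I recovers the ordinary cosine similarity.
    """
    numerator = x @ S @ y
    x_norm = np.sqrt(x @ S @ x)
    y_norm = np.sqrt(y @ S @ y)
    return numerator / (x_norm * y_norm)

# Invented three-term vocabulary: terms 0 and 1 are related (0.8),
# term 2 is unrelated to both.
S = np.array([[1.0, 0.8, 0.0],
              [0.8, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
x = np.array([1.0, 0.0, 0.0])  # document containing only term 0
y = np.array([0.0, 1.0, 0.0])  # document containing only term 1

print(soft_cosine_measure(x, y, S))  # 0.8 -- the plain cosine would be 0.0
```

Note that this dense sketch costs O(n^2) per document pair; the complexity claims in the abstract concern the paper's own algorithms, which reach an effectively constant per-document cost under realistic conditions with sparse document vectors and a precomputed term similarity matrix.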