Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- 4.10.4
- None
- New
Description
With a very large index (in our case > 10G), we are seeing exceptions like:
java.lang.ArrayIndexOutOfBoundsException: -62400
at org.apache.lucene.util.PagedBytes$Reader.fill(PagedBytes.java:116)
at org.apache.lucene.search.FieldCacheImpl$BinaryDocValuesImpl$1.get(FieldCacheImpl.java:1342)
at org.apache.lucene.search.join.TermsCollector$SV.collect(TermsCollector.java:106)
at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:193)
at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163)
at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:309)
The code in question ends up indexing an array with a negative offset. We believe the source of the error is in org.apache.lucene.search.FieldCacheImpl$BinaryDocValuesImpl$1.get, where the following code occurs:
final int pointer = (int) docToOffset.get(docID);
if (pointer == 0)
The cast to int overflows if the (long) result of docToOffset.get exceeds Integer.MAX_VALUE, and it is unnecessary in the first place, since bytes.fill takes a long as its second parameter.
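A minimal sketch of the overflow, using a hypothetical offset value chosen to reproduce the -62400 seen in the stack trace (the real offset comes from docToOffset.get):

```java
public class NarrowingCastDemo {
    public static void main(String[] args) {
        // Hypothetical byte offset just under 2^32, as can occur in a > 4 GB
        // block of stored bytes: 2^32 - 62400.
        long offset = 4_294_904_896L;

        // The narrowing (int) cast keeps only the low 32 bits, so any offset
        // past Integer.MAX_VALUE wraps; this one becomes negative.
        int truncated = (int) offset;

        System.out.println(truncated); // prints -62400
    }
}
```

Passing such a truncated value down to PagedBytes$Reader.fill as the offset is what surfaces as the ArrayIndexOutOfBoundsException above; keeping the pointer as a long avoids the wraparound entirely.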
Proposed fix:
final long pointer = docToOffset.get(docID);
if (pointer == 0) {
  term.length = 0;
} else {
  bytes.fill(term, pointer);
}