c# - FieldCache with frequently updating index


Hi - I have a Lucene index which is frequently updated with new records. There are 5,000,000 records in my index, and I cache one of my numeric fields using FieldCache. But after updating the index it takes time to reload the FieldCache (I reload the cache because the documentation says DocIDs are not reliable). So how can I reduce this overhead by adding only the newly added DocIDs to the FieldCache? This is becoming a potential bottleneck in my application.

  IndexReader reader = IndexReader.Open(diskDir);
  int[] dateArr = FieldCache_Fields.DEFAULT.GetInts(reader, "Newsletter"); // This line takes 4 seconds to load the array
  dateArr = FieldCache_Fields.DEFAULT.GetInts(reader, "Newsletter"); // We expect this line to take 0 seconds

  // Here we add some documents to the index, and we must reopen the reader
  // to reflect the changes.
  reader = reader.Reopen();
  dateArr = FieldCache_Fields.DEFAULT.GetInts(reader, "Newsletter"); // This takes 4 seconds again to load the array

I need a mechanism that minimizes this reload time by appending only the newly added documents to our existing array. As it stands, the cache reloads all the documents we already have; if we could find a way to add just the new additions to the array, reloading everything would not be required.

The FieldCache uses weak references to index readers as keys for its cache (obtained by calling IndexReader.GetCacheKey, which has been un-obsoleted). A standard call to IndexReader.Open with an FSDirectory will use a pool of readers, one for each segment.

You should always pass the innermost reader to the FieldCache. Check out ReaderUtil for some helper methods to retrieve the individual reader a document is contained in. Document ids will not change within a segment; what the documentation means when describing them as unpredictable/volatile is that they will change between two index commits. Documents could have been deleted, segments merged, and similar actions.

A commit needs to remove segments from disk (those merged/optimized away), which means that new readers will not have the pooled segment reader for them, and the garbage collector will remove it as soon as all older readers are closed.

Never, ever, call FieldCache.PurgeAllCaches(). It is meant for testing, not production use.

Added 2011-04-03: example code using subreaders.

  var directory = FSDirectory.Open(new DirectoryInfo("index"));
  var reader = IndexReader.Open(directory, readOnly: true);
  var documentId = 1337;

  // Grab all subreaders.
  var subReaders = new List<IndexReader>();
  ReaderUtil.GatherSubReaders(subReaders, reader);

  // Loop through all subreaders. While the local id is greater than or equal
  // to the maximum document id of the current subreader, move on to the next.
  var subReaderId = documentId;
  var subReader = subReaders.First(sub => {
      if (sub.MaxDoc() <= subReaderId) {
          subReaderId -= sub.MaxDoc();
          return false;
      }
      return true;
  });

  // subReader now contains documentId, addressed locally as subReaderId.
  var values = FieldCache_Fields.DEFAULT.GetInts(subReader, "Newsletter");
  var value = values[subReaderId];
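To tie this back to the original problem: because the cache is keyed per segment reader, a reopened top-level reader shares most segment readers with the old one, so only new segments pay the load cost. Below is a minimal sketch of that pattern, assuming Lucene.Net 2.9-era APIs (IndexReader.Reopen, ReaderUtil.GatherSubReaders) and reusing the question's field name "Newsletter":

  var directory = FSDirectory.Open(new DirectoryInfo("index"));
  var reader = IndexReader.Open(directory, readOnly: true);

  // Warm the cache per segment: one GetInts call per subreader.
  var subReaders = new List<IndexReader>();
  ReaderUtil.GatherSubReaders(subReaders, reader);
  foreach (var sub in subReaders)
      FieldCache_Fields.DEFAULT.GetInts(sub, "Newsletter");

  // ... documents are added and committed elsewhere ...

  var newReader = reader.Reopen();
  if (newReader != reader)
  {
      reader.Close();
      var newSubReaders = new List<IndexReader>();
      ReaderUtil.GatherSubReaders(newSubReaders, newReader);
      foreach (var sub in newSubReaders)
      {
          // Segments shared with the old reader hit the existing cache;
          // only newly written segments are actually loaded from disk.
          FieldCache_Fields.DEFAULT.GetInts(sub, "Newsletter");
      }
  }

This avoids the 4-second full reload after each update, since unchanged segment readers keep the same cache key across reopens.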
