LockObtainFailedException updating Lucene search index using Solr


I have googled this a lot. Most of the reported cases of this problem involve a stale lock left behind after a JVM crash. That is not my case.

I have an index with multiple readers and writers. I'm trying to do a massive update of the index (delete and add, which is how Lucene does updates). I am using the Solr embedded server (org.apache.solr.client.solrj.embedded.EmbeddedSolrServer). The other writers use the remote, non-streaming server (org.apache.solr.client.solrj.impl.CommonsHttpSolrServer).

I kick off this massive update, it runs fine for a while, then dies with:

    org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out:
    NativeFSLock@/.../Lucene-ff783c5d8800fd9722a95494d07d7e37-write.lock

I have adjusted my lock timeouts in solrconfig.xml:

    <writeLockTimeout>20000</writeLockTimeout>
    <commitLockTimeout>10000</commitLockTimeout>

I am starting to read the Lucene code to figure this out. Any help so I don't have to would be great!

Edit: All of my updates go through the following code (Scala):

    val req = new UpdateRequest
    req.setAction(AbstractUpdateRequest.ACTION.COMMIT, false, false)
    req.add(document)
    val rsp = req.process(solrServer)

solrServer is an instance of org.apache.solr.client.solrj.impl.CommonsHttpSolrServer, org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer, or org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.

Another edit: I stopped using EmbeddedSolrServer and it works now. I have two separate processes that update the Solr search index:

1) a servlet
2) a command line tool

The command line tool was using EmbeddedSolrServer and it would eventually crash with the LockObtainFailedException. When I switched it to StreamingUpdateSolrServer, the problems went away.

I am still a little confused that EmbeddedSolrServer would ever work at all. Can someone explain this? I thought it would play nicely with the servlet process, with each waiting while the other held the lock.

I assume that you are doing something like this:

    writer1.writeSomeStuff();
    writer2.writeSomeStuff();  // this one doesn't write

The reason this won't work is that the writer stays open until you close it. So writer1 writes and holds on to the lock, even after it is done writing. (Once a writer gets the lock, it never releases it until it is destroyed.) writer2 can't obtain the lock because writer1 is still holding it, so it throws a LockObtainFailedException.

If you want to use two writers, you have to do something like this:

    writer1.writeSomeStuff();
    writer1.close();
    writer2.open();
    writer2.writeSomeStuff();
    writer2.close();

Since you can only have one writer open at a time, this pretty much negates any benefit you would get from using multiple writers. (It's actually worse to open and close them all the time, because you are constantly paying a warm-up penalty.)

So the answer to what I suspect is your real question is: don't use multiple writers. Use a single writer with multiple threads accessing it (IndexWriter is thread-safe). If you are connecting to Solr via REST or some other HTTP API, a single Solr writer should be able to handle many requests.
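A minimal sketch of that single-writer, many-threads pattern in Scala. The `Writer` class here is a hypothetical stand-in for Lucene's IndexWriter (so the snippet runs without Lucene on the classpath); the point is the structure: one shared writer instance, a pool of threads submitting documents to it.

```scala
import java.util.concurrent.{Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicInteger

// Stand-in for a thread-safe index writer: one instance shared by all threads,
// so only one process ever holds the write lock.
class Writer {
  private val docsWritten = new AtomicInteger(0)
  def addDocument(doc: String): Unit = docsWritten.incrementAndGet()
  def count: Int = docsWritten.get()
}

object SingleWriterDemo {
  def main(args: Array[String]): Unit = {
    val writer = new Writer                    // the ONE writer
    val pool = Executors.newFixedThreadPool(4) // many threads, one writer
    (1 to 100).foreach { i =>
      pool.submit(new Runnable {
        def run(): Unit = writer.addDocument(s"doc-$i")
      })
    }
    pool.shutdown()
    pool.awaitTermination(10, TimeUnit.SECONDS)
    println(writer.count) // all 100 documents went through the single writer
  }
}
```

This is the inverse of the broken pattern above: instead of two writers contending for one lock, many threads funnel through one writer that holds the lock for its whole lifetime.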

I'm not sure what your use case is, but another possible answer is to check out Solr's features for managing multiple indexes. In particular, the ability to hot-swap cores might be of interest.
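For reference, a hot swap can be issued through Solr's CoreAdmin API with the SWAP action; something like the request below, where the core names `live` and `rebuild` are hypothetical (the host, port, and core names depend on your setup):

    http://localhost:8983/solr/admin/cores?action=SWAP&core=live&other=rebuild

The usual pattern is to do the massive rebuild against an offline core, then swap it in atomically so readers never see a half-built index.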
