umbraco / Umbraco-CMS

Umbraco is a free and open source .NET content management system helping you deliver delightful digital experiences.
https://umbraco.com

Lucene.Net.Store.AlreadyClosedException: this IndexReader is closed #9386

Closed · deekoulos closed 3 years ago

deekoulos commented 3 years ago

Umbraco 8.9.1

Hi!

Every now and then we see errors logged for requests to pages that use Lucene indexes, such as the member index. These errors are really hard to reproduce, but it seems there are still some issues with the index.

Here is a sample log:

Lucene.Net.Store.AlreadyClosedException: this IndexReader is closed
   at Lucene.Net.Index.IndexReader.EnsureOpen()
   at Lucene.Net.Index.IndexReader.Document(Int32 n)
   at Lucene.Net.Search.IndexSearcher.Doc(Int32 i)
   at Examine.LuceneEngine.LuceneSearchResults.CreateFromDocumentItem(Int32 i)
   at Examine.LuceneEngine.LuceneSearchResults.<Skip>d__22.MoveNext()
   at System.Linq.Enumerable.<TakeIterator>d__25`1.MoveNext()
   at Umbraco.Web.ExamineExtensions.ToPublishedSearchResults(IEnumerable`1 results, IPublishedSnapshot snapshot)
   at ASP._Page_Views_sitemapmember_cshtml.Execute() in C:\home\site\wwwroot\Views\sitemapmember.cshtml:line 39
   at System.Web.WebPages.WebPageBase.ExecutePageHierarchy()
   at System.Web.Mvc.WebViewPage.ExecutePageHierarchy()
   at System.Web.WebPages.WebPageBase.ExecutePageHierarchy(WebPageContext pageContext, TextWriter writer, WebPageRenderingBase startPage)
   at System.Web.Mvc.RazorView.RenderView(ViewContext viewContext, TextWriter writer, Object instance)
   at System.Web.Mvc.BuildManagerCompiledView.Render(ViewContext viewContext, TextWriter writer)
   at Umbraco.Web.Mvc.ProfilingView.Render(ViewContext viewContext, TextWriter writer)
   at System.Web.Mvc.ViewResultBase.ExecuteResult(ControllerContext context)
   at System.Web.Mvc.ControllerActionInvoker.InvokeActionResult(ControllerContext controllerContext, ActionResult actionResult)
   at System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilterRecursive(IList`1 filters, Int32 filterIndex, ResultExecutingContext preContext, ControllerContext controllerContext, ActionResult actionResult)
   at System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilterRecursive(IList`1 filters, Int32 filterIndex, ResultExecutingContext preContext, ControllerContext controllerContext, ActionResult actionResult)
   at System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilterRecursive(IList`1 filters, Int32 filterIndex, ResultExecutingContext preContext, ControllerContext controllerContext, ActionResult actionResult)
   at System.Web.Mvc.ControllerActionInvoker.InvokeActionResultFilterRecursive(IList`1 filters, Int32 filterIndex, ResultExecutingContext preContext, ControllerContext controllerContext, ActionResult actionResult)
   at System.Web.Mvc.ControllerActionInvoker.InvokeActionResultWithFilters(ControllerContext controllerContext, IList`1 filters, ActionResult actionResult)
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass3_6.<BeginInvokeAction>b__4()
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass3_1.<BeginInvokeAction>b__1(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResult`1.CallEndDelegate(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResultBase`1.End()
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.EndInvokeAction(IAsyncResult asyncResult)
   at System.Web.Mvc.Controller.<>c.<BeginExecuteCore>b__152_1(IAsyncResult asyncResult, ExecuteCoreState innerState)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncVoid`1.CallEndDelegate(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResultBase`1.End()
   at System.Web.Mvc.Controller.EndExecuteCore(IAsyncResult asyncResult)
   at System.Web.Mvc.Controller.<>c.<BeginExecute>b__151_2(IAsyncResult asyncResult, Controller controller)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncVoid`1.CallEndDelegate(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResultBase`1.End()
   at System.Web.Mvc.Controller.EndExecute(IAsyncResult asyncResult)
   at System.Web.Mvc.Controller.System.Web.Mvc.Async.IAsyncController.EndExecute(IAsyncResult asyncResult)
   at System.Web.Mvc.MvcHandler.<>c.<BeginProcessRequest>b__20_1(IAsyncResult asyncResult, ProcessRequestState innerState)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncVoid`1.CallEndDelegate(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResultBase`1.End()
   at System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult)
   at System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result)
   at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
   at System.Web.HttpApplication.ExecuteStepImpl(IExecutionStep step)
   at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

Any ideas?

Best Dee

nul800sebastiaan commented 3 years ago

We'd need to see some code to have any idea what might be causing this. :)

deekoulos commented 3 years ago

Sure. Here is the code for one of the affected pages, a sitemap node:


<?xml version="1.0" encoding="UTF-8" ?>
@inherits UmbracoViewPage
@using System.Linq;

@{
    Layout = null;
    Response.ContentType = "text/xml";
    Response.ContentEncoding = System.Text.Encoding.UTF8;

    int.TryParse(HttpContext.Current.Items["sitemapNumber"]?.ToString(), out int sitemapNumber);
    int take = 1500;
    int skip = sitemapNumber * take;
}

<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.google.com/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd" xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">

    @{
        string rootUrl = Model.Root().Url(mode: UrlMode.Absolute);

        if (!ExamineManager.Instance.TryGetIndex(Constants.UmbracoIndexes.MembersIndexName, out IIndex index) || !(index is Umbraco.Examine.IUmbracoIndex))
        {
            throw new InvalidOperationException("The required index MembersIndex was not found");
        }

        var booleanOperation = index.GetSearcher()
        .CreateQuery("member", Examine.Search.BooleanOperation.And)
        .Field("published", "1")
        .OrderBy(new Examine.Search.SortableField("id", Examine.Search.SortType.Int));

        var results = booleanOperation.Execute(0).Skip(skip).Take(take);

        // uncomment to use last modified date
        //var memberData = results.ToPublishedSearchResults(UmbracoContext.PublishedSnapshot).Select(x => new
        //{
        //    (x.Content as Umbraco.Web.PublishedModels.Member).Username,
        //    (x.Content as Umbraco.Web.PublishedModels.Member).LastUpdateTimestamp
        //});

        var memberData = results.ToPublishedSearchResults(UmbracoContext.PublishedSnapshot).Select(x => (x.Content as Umbraco.Web.PublishedModels.Member).Username);

        foreach (var data in memberData)
        {
            <url>
                <loc>@(rootUrl + "anbieter/" + data)</loc>

                @* uncomment to use last modified date *@
                @*<lastmod>@(string.Format("{0:s}+00:00", data.LastUpdateTimestamp != DateTime.MinValue ? data.LastUpdateTimestamp.ToString() : "01.01.2020"))</lastmod>*@
                @*<changefreq>daily</changefreq>
                    <priority>0.5</priority>*@
            </url>
        }
    }
</urlset>

deekoulos commented 3 years ago

Update: I found out that these exceptions occur when bots are crawling the site. I guess the volume of requests just brings the index to its knees when it contains a large number of members.

deekoulos commented 3 years ago

Since the status of this issue is still "needs more info", I am running out of ideas about what further information I should provide. In addition to sitemaps, I am seeing the exceptions on member directory pages as well, like Werbeagenturen, deep in the pagination... It seems to be related to load on the index.

alanwthom commented 3 years ago

Hi @deekoulos @nul800sebastiaan

We've had the same issue crop up on the front-end server of a load-balanced site using v8.6.5. There seem to be four info entries in the logs before the first occurrence of this Lucene.Net.Store.AlreadyClosedException error:

{"@t":"2020-12-13T18:28:23.1954976Z","@mt":"Stopping ({SignalSource})","SignalSource":"signal","SourceContext":"Umbraco.Core.Runtime.MainDom","ProcessId":7240,"ProcessName":"w3wp","ThreadId":67,"AppDomainId":4,"AppDomainAppId":"LMW3SVC234866623ROOT","MachineName":"RD281878FBF72F","Log4NetLevel":"INFO "}
{"@t":"2020-12-13T18:28:23.2865124Z","@mt":"Released from MainDom","SourceContext":"Umbraco.Web.PublishedCache.NuCache.PublishedSnapshotService","ProcessId":7240,"ProcessName":"w3wp","ThreadId":67,"AppDomainId":4,"AppDomainAppId":"LMW3SVC234866623ROOT","MachineName":"RD281878FBF72F","Log4NetLevel":"INFO "}
{"@t":"2020-12-13T18:28:23.3038861Z","@mt":"{StartMessage} [Timing {TimingId}]","StartMessage":"Examine shutting down","TimingId":"7eb60aa","SourceContext":"Umbraco.Web.Search.ExamineComponent","ProcessId":7240,"ProcessName":"w3wp","ThreadId":67,"AppDomainId":4,"AppDomainAppId":"LMW3SVC234866623ROOT","MachineName":"RD281878FBF72F","Log4NetLevel":"INFO "}
{"@t":"2020-12-13T18:28:23.3802636Z","@mt":"{EndMessage} ({Duration}ms) [Timing {TimingId}]","EndMessage":"Completed.","Duration":87,"TimingId":"7eb60aa","SourceContext":"Umbraco.Web.Search.ExamineComponent","ProcessId":7240,"ProcessName":"w3wp","ThreadId":67,"AppDomainId":4,"AppDomainAppId":"LMW3SVC234866623ROOT","MachineName":"RD281878FBF72F","Log4NetLevel":"INFO "}
{"@t":"2020-12-13T18:29:29.3041304Z","@mt":"An unhandled exception occurred","@l":"Error","@x":"Lucene.Net.Store.AlreadyClosedException: this IndexReader is closed\r\n
at Lucene.Net.Index.IndexReader.EnsureOpen() in d:\\Lucene.Net\\FullRepo\\trunk\\src\\core\\Index\\IndexReader.cs:line 204\r\n
at Lucene.Net.Index.DirectoryReader.GetFieldNames(FieldOption fieldNames) in d:\\Lucene.Net\\FullRepo\\trunk\\src\\core\\Index\\DirectoryReader.cs:line 1055\r\n
at Examine.LuceneEngine.Providers.LuceneSearcher.GetAllIndexedFields() in C:\\projects\\examine-qvx04\\src\\Examine\\LuceneEngine\\Providers\\LuceneSearcher.cs:line 101\r\n
at Examine.LuceneEngine.Providers.BaseLuceneSearcher.CreateQuery(String category, BooleanOperation defaultOperation, Analyzer luceneAnalyzer, LuceneSearchOptions searchOptions) in C:\\projects\\examine-qvx04\\src\\Examine\\LuceneEngine\\Providers\\BaseLuceneSearcher.cs:line 64\r\n
at Examine.LuceneEngine.Providers.BaseLuceneSearcher.CreateQuery(String category, BooleanOperation defaultOperation) in C:\\projects\\examine-qvx04\\src\\Examine\\LuceneEngine\\Providers\\BaseLuceneSearcher.cs:line 49\r\n

I'm unsure if perhaps the ValidateSearcher() method in Examine might be returning a true result when in fact the reader is actually closed? My knowledge of Examine and the locking/unlocking processes isn't great, but hopefully these errors provide a little bit more context. Please let me know if I can provide any more info.

Thanks

Shazwazza commented 3 years ago

The only time a store will be closed is when the Examine manager is disposed. This is entirely controlled by Umbraco's MainDom. When running on Azure there is a special SqlMainDomLock that needs to be added. Are you running on Azure? Do you have these settings? https://our.umbraco.com/documentation/Getting-Started/Setup/Server-Setup/azure-web-apps#recommended-configuration
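For reference, the key appSettings from that page look along these lines (paraphrased from the linked docs; double-check the current values there before copying):

<!-- web.config appSettings recommended for Azure Web Apps (see linked docs) -->
<add key="Umbraco.Core.MainDom.Lock" value="SqlMainDomLock" />
<add key="Umbraco.Core.LocalTempStorage" value="EnvironmentTemp" />
<add key="Examine.LuceneDirectoryFactory" value="Examine.LuceneEngine.Directories.SyncTempEnvDirectoryFactory, Examine" />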

deekoulos commented 3 years ago

Hey @Shazwazza, in my case it's running in an Umbraco Cloud environment, so it should be Azure. I just added these settings to web.config. Let's see if the errors still occur.

alanwthom commented 3 years ago

Hi @Shazwazza

Yes, this is running in Azure. We have two web apps, one for the admin and one for the front-end. The admin server has the following settings:

[screenshot of admin server settings]

and the front-end server has the following settings:

[screenshot of front-end server settings]

Shazwazza commented 3 years ago

@deekoulos

On cloud you shouldn't worry about those settings then.

> I guess the volume of requests just brings the index to its knees when it contains a large number of members.

The amount of requests to Lucene should not bring an index down.

Lucene.Net.Store.AlreadyClosedException is caused when the index reader is closed, but that only happens when Examine shuts down.

The only thing I can think of is that MainDom is shutting down even though your website is not actually shutting down, but I have only seen that on Azure websites, not Umbraco Cloud, since MainDom is controlled differently there.

You could try adding this to your serilog.config file and then monitoring when MainDom shuts down. A MainDom shutdown should always coincide with an actual app domain shutdown/restart. If you are seeing MainDom shut down but the appdomain not actually restarting, then this is the problem.

<add key="serilog:minimum-level:override:Umbraco.Core.Runtime.MainDom" value="Debug" />
<add key="serilog:minimum-level:override:Umbraco.Core.Runtime.MainDomSemaphoreLock" value="Debug" />
<add key="serilog:minimum-level:override:Umbraco.Core.Runtime.SqlMainDomLock" value="Debug" />

@alanwthom

Those settings look good, thank you :) You can also look at implementing the logging settings above to monitor MainDom shutdown vs appdomain shutdown. There's a new 8.6.x version coming out soon with this PR fix in it: https://github.com/umbraco/Umbraco-CMS/pull/9543

With that fix, MainDom will shut down if your SQL Azure connection is severed entirely. We've seen this happen in some installs, which is strange and shouldn't happen, but alas I guess it does.

deekoulos commented 3 years ago

> The amount of requests to Lucene should not bring an index down.
>
> Lucene.Net.Store.AlreadyClosedException is caused when the index reader is closed, but that only happens when Examine shuts down.

I understand, but these errors only occur when bots are exploring the site, and please consider that we have a large number of members in the index. Once I used an SEO tool and explicitly ran an SEO scan over the site, which ran 5 crawl bots at once. It raised these exceptions many times. Then I reduced the number of crawlers in the SEO tool from 5 to 1, and no errors were logged.

So there must be a relation between load / simultaneous requests on index-served pages and the Examine index behaviour.

Shazwazza commented 3 years ago

@deekoulos How large is your member index? And can you replicate this behaviour locally while running load tests or whatever tests emulate what a bot is doing on your site? Have you benchmarked this sitemap script to see if it's executing fast or if there are issues with it? 5 SEO crawl bots doesn't seem like a lot of traffic to bring anything down. I would suggest running a load tester and benchmarking these pages locally to see what might be going on.
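Even a tiny console app firing concurrent requests at the affected pages would do as a first pass. A rough sketch (the URL and counts are placeholders, not values from this thread):

// Rough load-test sketch: N "bots" requesting the same page in parallel.
using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class CrawlerLoadTest
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            const string url = "http://localhost:8080/sitemapmember"; // hypothetical local clone
            const int bots = 5;             // emulate 5 crawl bots
            const int requestsPerBot = 200;

            var sw = Stopwatch.StartNew();
            var tasks = Enumerable.Range(0, bots).Select(async bot =>
            {
                for (var i = 0; i < requestsPerBot; i++)
                {
                    var response = await client.GetAsync(url);
                    if (!response.IsSuccessStatusCode)
                        Console.WriteLine($"bot {bot}: HTTP {(int)response.StatusCode} on request {i}");
                }
            }).ToArray();

            await Task.WhenAll(tasks);
            Console.WriteLine($"{bots * requestsPerBot} requests in {sw.Elapsed}");
        }
    }
}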

deekoulos commented 3 years ago

@Shazwazza around 20k members. This behaviour is not related only to sitemaps. It's related to all index-served pages, like the category pages.

The website, including these category pages, is actually performing like a rocket. Here, for instance, is the performance score from Google's PageSpeed Insights for a category page, which queries the index, filters, sorts and displays the relevant member data:

[PageSpeed Insights score screenshot]

A better site performance is almost impossible.

Yesterday evening, a Bing bot caused a "Lock obtain timed out" exception, and after that the member index was locked. I didn't even have access to it in the backoffice under Examine Management; exceptions were raised on visiting the Examine Management page. In the end I had to force an application restart by editing the web.config to get things working again.

Here some details about the exception yesterday:

User Agent: (Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm))
Type: Lucene.Net.Store.LockObtainFailedException
Exception Message: Lock obtain timed out: SimpleFSLock@C:\home\site\wwwroot\App_Data\TEMP\ExamineIndexes\Members\write.lock

I need to find some time for proper load testing, but in the end the tests would just produce the same error logs, or do you expect to get more details from load tests?

Please consider that every category contains many pages (pagination) and allows different sortings and filters. I don't know how many requests bots are able to make per second, but according to the logs they are requesting a lot of categories and the underlying pages (page=1, page=2, ...), including different variations of sortings and filters. At some point, the index stops responding properly.

Shazwazza commented 3 years ago

> This behaviour is not related only to sitemaps. It's related to all index-served pages, like the category pages.

Ok cool. The main question is whether you can replicate this locally in any way; without that, it will be very difficult to diagnose. Please also ensure you have those log settings applied, and monitor whether your appdomain or MainDom is shutting down when these errors occur. Presumably you can run a load-testing script that emulates what a bot is doing, locally, against a cloned version of your site.

Are you doing anything whatsoever that is customizing Examine?

> Lucene.Net.Store.AlreadyClosedException is caused when the index reader is closed, but that only happens when Examine shuts down.

I may have confused myself with a different issue. Since this has to do with an IndexReader, it could be an issue with Examine's IndexReader tracking: LuceneSearcher.ValidateSearcher, OpenReaderTracker, and ReaderReopener (when not in NRT mode).

Once this can be replicated it can be diagnosed. I've never seen this problem with other sites, and Examine is used extensively on tons of high-traffic sites, so it will be interesting to find the root cause.

This issue should really be moved to the Examine repo though, it's not really Umbraco related.

nul800sebastiaan commented 3 years ago

I guess it might be interesting to see what Feedbax.Core.Helpers.ModelHelper.CreateCompanyListModel does as well.

deekoulos commented 3 years ago

@Shazwazza

> The main question is whether you can replicate this locally in any way; without that, it will be very difficult to diagnose.

The main issue is that there is no member sync between environments. In order to simulate this locally, I need all 20k members from the live environment with proper field assignments. For instance, every member has multiple field values pointing to content nodes, and content nodes in live have different IDs than locally. So I would need to somehow import all members locally with real data, and this could be tricky. Maybe I should run the load tests directly on the live environment and track the logs there. I don't think I really need to implement or run a load-test script; I would just run the SEO crawl bots over it and track the logs.

> Please also ensure you have those log settings applied, and monitor whether your appdomain or MainDom is shutting down when these errors occur.

Done!

> Are you doing anything whatsoever that is customizing Examine?

The only Examine customization here is that I am extending the standard fields of the member index and indexing all available fields (70-80 fields, I guess) for the member type, because we need all member fields to be indexed and searchable:
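In Umbraco 8 that looks roughly like the following component (a simplified sketch of the approach, not our exact code; the field name is illustrative):

// Simplified sketch: extend the member index via TransformingIndexValues.
using Examine;
using Examine.Providers;
using Umbraco.Core;
using Umbraco.Core.Composing;

[RuntimeLevel(MinLevel = RuntimeLevel.Run)]
public class MemberIndexComposer : ComponentComposer<MemberIndexComponent> { }

public class MemberIndexComponent : IComponent
{
    private readonly IExamineManager _examineManager;

    public MemberIndexComponent(IExamineManager examineManager)
    {
        _examineManager = examineManager;
    }

    public void Initialize()
    {
        if (_examineManager.TryGetIndex(Constants.UmbracoIndexes.MembersIndexName, out IIndex index)
            && index is BaseIndexProvider indexProvider)
        {
            // Called for every item as it is indexed; lets us add extra fields.
            indexProvider.TransformingIndexValues += (sender, e) =>
            {
                if (e.ValueSet.Category == "member")
                {
                    e.ValueSet.TryAdd("myCustomField", "some value"); // illustrative field
                }
            };
        }
    }

    public void Terminate() { }
}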

> Once this can be replicated it can be diagnosed. I've never seen this problem with other sites, and Examine is used extensively on tons of high-traffic sites, so it will be interesting to find the root cause.

Maybe other websites do not have so many members and/or do not index all 80 fields per member.

> This issue should really be moved to the Examine repo though, it's not really Umbraco related.

I am fine with moving this to Examine, please let me know under what URL I can access the issue.

Shazwazza commented 3 years ago

@deekoulos I would recommend adding a ToList() to resolve this IEnumerable result:

model.Members = members.Select(x => x.Content as Umbraco.Web.PublishedModels.Member);

so like:

model.Members = members.Select(x => x.Content as Umbraco.Web.PublishedModels.Member).ToList();

The reason is that the result of the search is an IEnumerable with a custom iterator, both in Umbraco and in Examine. If the result of model.Members is re-iterated, it could mean re-issuing the search. It also means, within Umbraco, that it's trying to dynamically change the variant context while re-iterating search results. If you are then possibly caching the result of this method, it's only caching the enumerator and not the results, which could end up re-issuing the search (possibly with a stale reader).

There's a possibility that could be related to this issue.
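To make the deferred-execution point concrete with plain LINQ (a generic sketch, nothing Umbraco- or Examine-specific):

// Every enumeration of a deferred IEnumerable re-runs the underlying work,
// just like re-iterating search results can re-issue the search.
using System;
using System.Collections.Generic;
using System.Linq;

class DeferredExecutionDemo
{
    static int _searchCount;

    // Stand-in for a search call (hypothetical).
    static IEnumerable<string> Search()
    {
        _searchCount++;
        Console.WriteLine($"search executed, count = {_searchCount}");
        yield return "member-1";
        yield return "member-2";
    }

    static void Main()
    {
        IEnumerable<string> members = Search().Select(x => x.ToUpperInvariant());

        var total = members.Count();   // 1st enumeration: search runs
        foreach (var m in members) { } // 2nd enumeration: search runs again

        var list = Search().Select(x => x.ToUpperInvariant()).ToList(); // runs once
        var totalAgain = list.Count;   // reads the in-memory list, no re-execution
    }
}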

deekoulos commented 3 years ago

> If the result of model.Members is re-iterated, it could mean re-issuing the search.

@Shazwazza

Thanks for the hint! Could you please elaborate on the cases in which the results would be re-iterated? Every sorting or paging action leads to a new page request, so I am not sure how the results of the same page request would be re-iterated.

Shazwazza commented 3 years ago

> Could you please elaborate on the cases in which the results would be re-iterated?

Yes, your model.Members is an IEnumerable property which is set based on other IEnumerables: first contentQuery.Search and then a LINQ filter, members.Select(x => x.Content as Umbraco.Web.PublishedModels.Member).

So now, any time you might do:
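Something like this, for example (an illustrative fragment; each pass re-executes the search):

var memberCount = model.Members.Count();            // enumeration #1
var first = model.Members.FirstOrDefault();         // enumeration #2
foreach (var member in model.Members) { /* ... */ } // enumeration #3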

Resharper and other tools will warn you about this.

There's a chance that you aren't going to execute that enumeration more than once, and if that's the case then that's awesome and there is no need to put the entire collection into memory in a list or array. This all depends on how you are using model.Members. Ideally you only iterate it once, but if you are doing it more than once you should resolve it to an in-memory list.

deekoulos commented 3 years ago

@Shazwazza Hey mate, thanks for your clarification. I only had one foreach in the view iterating the members, but just to be sure that this is not the cause of the index issue, I added the ToList method as you suggested and will keep an eye on the logs to see if these errors occur again.

I wish you and your family a Merry Christmas!

deekoulos commented 3 years ago

The errors still occur; another bot crawl just raised the same errors again, although I switched to .ToList(). So this is not the cause of the issue.

Shazwazza commented 3 years ago

@deekoulos thanks for checking. I have found one issue that may possibly be the underlying cause. The issue is logged here https://github.com/Shazwazza/Examine/issues/203

I'll push a new Examine release very soon. For now, if you want to try the attached release, that would be most helpful. Just remove the .zip extension and you can use it as a normal nupkg package. These are the changes included in this release: https://github.com/Shazwazza/Examine/milestone/25?closed=1

Examine.1.1.0.nupkg.zip
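(Side note for anyone trying the package: one way to consume a local .nupkg is a folder package source in NuGet.config; the folder path below is just an example.)

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- example local folder containing Examine.1.1.0.nupkg -->
    <add key="local" value="C:\packages" />
  </packageSources>
</configuration>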

deekoulos commented 3 years ago

@Shazwazza thanks for the package. Just installed and deployed it; hope it resolves the issue. The index was locked out again and the whole platform was not responding anymore because of that, so I had to restart the live environment.

Shazwazza commented 3 years ago

@deekoulos any news with the latest package?

deekoulos commented 3 years ago

Hey Shannon, thanks for asking. No issues recorded yet!

Shazwazza commented 3 years ago

🤞 🤞 Let me know how things go. That seems positive, right? It seems you were having this issue very often before.

deekoulos commented 3 years ago

According to the logs, bots recrawl the website every 2-4 weeks. So if these errors aren't raised again in the next 3 weeks, the issue should be resolved. I will keep you updated for sure!!

deekoulos commented 3 years ago

Hey @Shazwazza ,

a new index exception was raised just an hour ago, caused by a bot. This time the exception is different:

User Agent: (Mozilla/5.0 (compatible; Adsbot/3.1))
Type: Lucene.Net.Index.CorruptIndexException
Message: did not read all bytes from file "_eii.fnm": read 659 vs size 9958

timeverts commented 3 years ago

@Shazwazza,

I get the 'Index reader is closed' error happening regularly on an Azure-hosted Umbraco 8.9.1 site too, using Examine 1.0.6. Looking at the logs, it appears to happen when the app has been shut down (I guess by Azure) and the index is accessed thereafter (e.g. by a user triggering a search on the search page of the site).

So near shutdown, the logs show the following errors:

{"@t":"2021-01-11T04:05:42.8097902Z","@mt":"App is shutting down so index batch operation is ignored","@l":"Error","SourceContext":"Umbraco.Examine.UmbracoContentIndex","ProcessId":6924,"ProcessName":"w3wp","ThreadId":121,"AppDomainId":2,"AppDomainAppId":"LMW3SVC728100230ROOT","MachineName":"RD501AC51A43B4","Log4NetLevel":"ERROR"}
{"@t":"2021-01-11T04:05:42.8097902Z","@mt":"{LogPrefix} Task has failed","@l":"Error","@x":"System.ObjectDisposedException: The CancellationTokenSource has been disposed.\r\n   at System.Threading.CancellationTokenSource.ThrowObjectDisposedException()\r\n   at Examine.LuceneEngine.Providers.LuceneIndex.SafelyProcessQueueItems(Action`1 onComplete) in C:\\projects\\examine-qvx04\\src\\Examine\\LuceneEngine\\Providers\\LuceneIndex.cs:line 783\r\n   at Examine.LuceneEngine.Providers.LuceneIndex.PerformIndexItems(IEnumerable`1 values, Action`1 onComplete) in C:\\projects\\examine-qvx04\\src\\Examine\\LuceneEngine\\Providers\\LuceneIndex.cs:line 302\r\n   at Umbraco.Examine.UmbracoContentIndex.PerformIndexItems(IEnumerable`1 values, Action`1 onComplete) in D:\\a\\1\\s\\src\\Umbraco.Examine\\UmbracoContentIndex.cs:line 100\r\n   at Examine.Providers.BaseIndexProvider.IndexItems(IEnumerable`1 values) in C:\\projects\\examine-qvx04\\src\\Examine\\Providers\\BaseIndexProvider.cs:line 76\r\n   at Umbraco.Web.Search.ExamineComponent.DeferedReIndexForMedia.<>c__DisplayClass5_0.<Execute>b__0() in D:\\a\\1\\s\\src\\Umbraco.Web\\Search\\ExamineComponent.cs:line 680\r\n   at Umbraco.Web.Scheduling.SimpleTask.Run() in D:\\a\\1\\s\\src\\Umbraco.Web\\Scheduling\\SimpleTask.cs:line 21\r\n   at Umbraco.Web.Scheduling.BackgroundTaskRunner`1.<RunAsync>d__40.MoveNext() in D:\\a\\1\\s\\src\\Umbraco.Web\\Scheduling\\BackgroundTaskRunner.cs:line 562","LogPrefix":"[Umbraco.Web.Scheduling.IBackgroundTask] ","SourceContext":"Umbraco.Web.Scheduling.BackgroundTaskRunner","ProcessId":6924,"ProcessName":"w3wp","ThreadId":121,"AppDomainId":2,"AppDomainAppId":"LMW3SVC728100230ROOT","MachineName":"RD501AC51A43B4","Log4NetLevel":"ERROR"}

And then when the index is next accessed the following error is logged:

{"@t":"2021-01-11T06:40:59.2819386Z","@mt":"Unhandled controller exception occurred for request '{RequestUrl}'","@l":"Error","@x":"System.AggregateException: Exceptions were thrown by listed actions. ---> Lucene.Net.Store.AlreadyClosedException: this IndexReader is closed\r\n   at Lucene.Net.Index.IndexReader.EnsureOpen() in d:\\Lucene.Net\\FullRepo\\trunk\\src\\core\\Index\\IndexReader.cs:line 204\r\n   at Lucene.Net.Index.DirectoryReader.GetFieldNames(FieldOption fieldNames) in d:\\Lucene.Net\\FullRepo\\trunk\\src\\core\\Index\\DirectoryReader.cs:line 1055\r\n   at Examine.LuceneEngine.Providers.LuceneSearcher.GetAllIndexedFields() in C:\\projects\\examine-qvx04\\src\\Examine\\LuceneEngine\\Providers\\LuceneSearcher.cs:line 101\r\n   at Examine.LuceneEngine.Providers.BaseLuceneSearcher.CreateQuery(String category, BooleanOperation defaultOperation, Analyzer luceneAnalyzer, LuceneSearchOptions searchOptions) in C:\\projects\\examine-qvx04\\src\\Examine\\LuceneEngine\\Providers\\BaseLuceneSearcher.cs:line 64\r\n   at Examine.LuceneEngine.Providers.BaseLuceneSearcher.CreateQuery(String category, BooleanOperation defaultOperation) in C:\\projects\\examine-qvx04\\src\\Examine\\LuceneEngine\\Providers\\BaseLuceneSearcher.cs:line 49\r\n   at Umbraco.Examine.UmbracoContentIndex.PerformDeleteFromIndex(IEnumerable`1 itemIds, Action`1 onComplete) in D:\\a\\1\\s\\src\\Umbraco.Examine\\UmbracoContentIndex.cs:line 131\r\n   at Examine.Providers.BaseIndexProvider.DeleteFromIndex(IEnumerable`1 itemIds) in C:\\projects\\examine-qvx04\\src\\Examine\\Providers\\BaseIndexProvider.cs:line 82\r\n   at Umbraco.Web.Search.ExamineComponent.DeferedDeleteIndex.Execute(ExamineComponent examineComponent, Int32 id, Boolean keepIfUnpublished) in D:\\a\\1\\s\\src\\Umbraco.Web\\Search\\ExamineComponent.cs:line 747\r\n   at Umbraco.Web.Search.ExamineComponent.DeferedDeleteIndex.Execute() in D:\\a\\1\\s\\src\\Umbraco.Web\\Search\\ExamineComponent.cs:line 741\r\n   at Umbraco.Web.Search.ExamineComponent.DeferedActions.Execute() in D:\\a\\1\\s\\src\\Umbraco.Web\\Search\\ExamineComponent.cs:line 592\r\n   at Umbraco.Web.Search.ExamineComponent.DeferedActions.<>c.<Get>b__1_1(Boolean completed, DeferedActions actions) in D:\\a\\1\\s\\src\\Umbraco.Web\\Search\\ExamineComponent.cs:line 581\r\n   at Umbraco.Core.Scoping.ScopeContext.EnlistedObject`1.Execute(Boolean completed) in D:\\a\\1\\s\\src\\Umbraco.Core\\Scoping\\ScopeContext.cs:line 69\r\n   at Umbraco.Core.Scoping.ScopeContext.ScopeExit(Boolean completed) in D:\\a\\1\\s\\src\\Umbraco.Core\\Scoping\\ScopeContext.cs:line 26\r\n   --- End of inner exception stack trace ---\r\n   at Umbraco.Core.Scoping.ScopeContext.ScopeExit(Boolean completed) in D:\\a\\1\\s\\src\\Umbraco.Core\\Scoping\\ScopeContext.cs:line 38\r\n   at Umbraco.Core.Scoping.Scope.<>c__DisplayClass71_0.<RobustExit>b__2() in D:\\a\\1\\s\\src\\Umbraco.Core\\Scoping\\Scope.cs:line 437\r\n   at Umbraco.Core.Scoping.Scope.TryFinally(Int32 index, Action[] actions) in D:\\a\\1\\s\\src\\Umbraco.Core\\Scoping\\Scope.cs:line 472\r\n   at Umbraco.Core.Scoping.Scope.TryFinally(Int32 index, Action[] actions) in D:\\a\\1\\s\\src\\Umbraco.Core\\Scoping\\Scope.cs:line 476\r\n   at Umbraco.Core.Scoping.Scope.TryFinally(Int32 index, Action[] actions) in D:\\a\\1\\s\\src\\Umbraco.Core\\Scoping\\Scope.cs:line 476\r\n   at Umbraco.Core.Scoping.Scope.RobustExit(Boolean completed, Boolean onException) in D:\\a\\1\\s\\src\\Umbraco.Core\\Scoping\\Scope.cs:line 420\r\n   at Umbraco.Core.Scoping.Scope.DisposeLastScope() in 
D:\\a\\1\\s\\src\\Umbraco.Core\\Scoping\\Scope.cs:line 400\r\n   at Umbraco.Core.Scoping.Scope.Dispose() in D:\\a\\1\\s\\src\\Umbraco.Core\\Scoping\\Scope.cs:line 363\r\n   at Umbraco.Core.Services.Implement.ContentService.Save(IContent content, Int32 userId, Boolean raiseEvents) in D:\\a\\1\\s\\src\\Umbraco.Core\\Services\\Implement\\ContentService.cs:line 815\r\n   at Umbraco.Web.Editors.ContentController.<>c__DisplayClass31_0.<PostSave>b__0(IContent content) in D:\\a\\1\\s\\src\\Umbraco.Web\\Editors\\ContentController.cs:line 625\r\n   at Umbraco.Web.Editors.ContentController.SaveAndNotify(ContentItemSave contentItem, Func`2 saveMethod, Int32 variantCount, Dictionary`2 notifications, SimpleNotificationModel globalNotifications, String invariantSavedLocalizationKey, String variantSavedLocalizationKey, String cultureForInvariantErrors, Boolean& wasCancelled) in D:\\a\\1\\s\\src\\Umbraco.Web\\Editors\\ContentController.cs:line 924\r\n   at Umbraco.Web.Editors.ContentController.PostSaveInternal(ContentItemSave contentItem, Func`2 saveMethod, Func`2 mapToDisplay) in D:\\a\\1\\s\\src\\Umbraco.Web\\Editors\\ContentController.cs:line 712\r\n   at Umbraco.Web.Editors.ContentController.PostSave(ContentItemSave contentItem) in D:\\a\\1\\s\\src\\Umbraco.Web\\Editors\\ContentController.cs:line 623\r\n   at lambda_method(Closure , Object , Object[] )\r\n   at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ActionExecutor.<>c__DisplayClass6_2.<GetExecutor>b__2(Object instance, Object[] methodParameters)\r\n   at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ActionExecutor.Execute(Object instance, Object[] arguments)\r\n   at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ExecuteAsync(HttpControllerContext controllerContext, IDictionary`2 arguments, CancellationToken cancellationToken)\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<ExecuteActionFilterAsyncCore>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was 
thrown ---\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<ExecuteActionFilterAsyncCore>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<ExecuteActionFilterAsyncCore>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<ExecuteActionFilterAsyncCore>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<ExecuteActionFilterAsyncCore>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at 
System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<ExecuteActionFilterAsyncCore>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<ExecuteActionFilterAsyncCore>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<ExecuteActionFilterAsyncCore>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<CallOnActionExecutedAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.ActionFilterAttribute.<ExecuteActionFilterAsyncCore>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at 
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.AuthorizationFilterAttribute.<ExecuteAuthorizationFilterAsyncCore>d__3.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.AuthorizationFilterAttribute.<ExecuteAuthorizationFilterAsyncCore>d__3.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.AuthorizationFilterAttribute.<ExecuteAuthorizationFilterAsyncCore>d__3.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Filters.AuthorizationFilterAttribute.<ExecuteAuthorizationFilterAsyncCore>d__3.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ExceptionFilterResult.<ExecuteAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Web.Http.Controllers.ExceptionFilterResult.<ExecuteAsync>d__6.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()\r\n---> (Inner Exception #0) Lucene.Net.Store.AlreadyClosedException: this IndexReader is closed\r\n   at Lucene.Net.Index.IndexReader.EnsureOpen() in d:\\Lucene.Net\\FullRepo\\trunk\\src\\core\\Index\\IndexReader.cs:line 204\r\n   at Lucene.Net.Index.DirectoryReader.GetFieldNames(FieldOption fieldNames) in d:\\Lucene.Net\\FullRepo\\trunk\\src\\core\\Index\\DirectoryReader.cs:line 1055\r\n   at Examine.LuceneEngine.Providers.LuceneSearcher.GetAllIndexedFields() in C:\\projects\\examine-qvx04\\src\\Examine\\LuceneEngine\\Providers\\LuceneSearcher.cs:line 101\r\n   at Examine.LuceneEngine.Providers.BaseLuceneSearcher.CreateQuery(String category, BooleanOperation defaultOperation, Analyzer luceneAnalyzer, LuceneSearchOptions searchOptions) in C:\\projects\\examine-qvx04\\src\\Examine\\LuceneEngine\\Providers\\BaseLuceneSearcher.cs:line 64\r\n   at Examine.LuceneEngine.Providers.BaseLuceneSearcher.CreateQuery(String category, BooleanOperation 
defaultOperation) in C:\\projects\\examine-qvx04\\src\\Examine\\LuceneEngine\\Providers\\BaseLuceneSearcher.cs:line 49\r\n   at Umbraco.Examine.UmbracoContentIndex.PerformDeleteFromIndex(IEnumerable`1 itemIds, Action`1 onComplete) in D:\\a\\1\\s\\src\\Umbraco.Examine\\UmbracoContentIndex.cs:line 131\r\n   at Examine.Providers.BaseIndexProvider.DeleteFromIndex(IEnumerable`1 itemIds) in C:\\projects\\examine-qvx04\\src\\Examine\\Providers\\BaseIndexProvider.cs:line 82\r\n   at Umbraco.Web.Search.ExamineComponent.DeferedDeleteIndex.Execute(ExamineComponent examineComponent, Int32 id, Boolean keepIfUnpublished) in D:\\a\\1\\s\\src\\Umbraco.Web\\Search\\ExamineComponent.cs:line 747\r\n   at Umbraco.Web.Search.ExamineComponent.DeferedDeleteIndex.Execute() in D:\\a\\1\\s\\src\\Umbraco.Web\\Search\\ExamineComponent.cs:line 741\r\n   at Umbraco.Web.Search.ExamineComponent.DeferedActions.Execute() in D:\\a\\1\\s\\src\\Umbraco.Web\\Search\\ExamineComponent.cs:line 592\r\n   at Umbraco.Web.Search.ExamineComponent.DeferedActions.<>c.<Get>b__1_1(Boolean completed, DeferedActions actions) in D:\\a\\1\\s\\src\\Umbraco.Web\\Search\\ExamineComponent.cs:line 581\r\n   at Umbraco.Core.Scoping.ScopeContext.EnlistedObject`1.Execute(Boolean completed) in D:\\a\\1\\s\\src\\Umbraco.Core\\Scoping\\ScopeContext.cs:line 69\r\n   at Umbraco.Core.Scoping.ScopeContext.ScopeExit(Boolean completed) in D:\\a\\1\\s\\src\\Umbraco.Core\\Scoping\\ScopeContext.cs:line 26<---\r\n","RequestUrl":"https://alcoholthinkagain.com.au/umbraco/backoffice/UmbracoApi/Content/PostSave","SourceContext":"Umbraco.Web.Editors.ContentController","ProcessId":6924,"ProcessName":"w3wp","ThreadId":35,"AppDomainId":2,"AppDomainAppId":"LMW3SVC728100230ROOT","MachineName":"RD501AC51A43B4","Log4NetLevel":"ERROR","HttpRequestNumber":9515,"HttpRequestId":"b0c05965-7d69-44ea-b479-1c074f80ecac"}

My hosting setup is that I am using slot swapping between a staging and a production environment, so essentially both slots are sharing the same database. On a deployment, the staging slot/site is used to verify that the deployed build is executing correctly and, if so, it is swapped with the production slot to make the changes live.

I realise that according to #8006, slot swapping is essentially load balancing. Based on that, should I be configuring my staging slot as a replica/frontend and the production slot as a master/admin, with configuration similar to what @alanwthom described for his setup? Or should slot swapping applications be configured differently from a load balancing scenario, since the staging slot potentially has to have access to Umbraco admin (e.g. on deployment) as well before swapping with production?

I guess I'm looking for some guidance here on how to resolve these Examine issues, and whether it's a fault of my configuration or of the Examine library version. I look forward to your advice on this.

Shazwazza commented 3 years ago

@deekoulos weird, I've also never seen this error. You seem to have a knack for somehow eliciting these types of things :P All I can do is investigate that in Lucene to see how it may occur. In the meantime, keep me posted on what happens on the next bot crawl. Did you rebuild your indexes after this one?

@timeverts There is no perfect way to shut down Lucene while there are queued index operations; some of those errors are expected and should be changed to Info instead of Error, like "App is shutting down so index batch operation is ignored". This is a limitation of Lucene running in a web application: we cannot hold up the web app from shutting down, since that results in much worse problems. It does try to finish some indexing before shutting down, but if the queue is large it cannot. This is a primary reason I created ExamineX.

Your 2nd error there, the "Task has failed" System.ObjectDisposedException, should be dealt with as part of https://github.com/Shazwazza/Examine/issues/166, which is in Examine 1.0.4, so I would need to investigate how that could still occur.

Your 3rd exception, I suppose, will need to be handled; it is a timing issue where MainDom is shutting down and trying to unwind Lucene while, at the same time, an action in the back office is trying to do Lucene things that are no longer there. The 'fix' for that is really to avoid the exception being thrown, but there's not much that can be done with regards to actually processing the requested Lucene operation.

What is not clear from your description, when you say "And then when the index is next accessed the following error is logged", is the outcome of this. Is your app no longer usable, or does everything work fine again afterwards when requests are running on the new appdomain? (I need more info on this one.)

I would suggest you also try the latest (beta) release as mentioned here; though it won't resolve that 3rd issue, it will stabilize other potential ones: https://github.com/umbraco/Umbraco-CMS/issues/9386#issuecomment-753745328

RE: slot swapping - yes, this is load balancing with write + write, which as you've noted is not really supported, but in theory it should be OK in v8 because of the distributed locks that are in place.

Based on that, should I be configuring my staging slot as a replica/frontend and the production slot as a master/admin, with configuration similar to what @alanwthom described for his setup

If you can then yes, but you will have to 'flip' those settings when swapping. I've never tried this before.

Shazwazza commented 3 years ago

@timeverts I've just fixed one of those issues you reported, the bug report is here https://github.com/Shazwazza/Examine/issues/204

This is another beta of Examine 1.1.0 you can try: Examine.1.1.0.nupkg.zip

timeverts commented 3 years ago

@Shazwazza , thanks so much for your feedback.

I'm very sorry. I read your comments and then realised that I made a mistake when entering the current Examine version on the site. It's actually using Examine 1.0.2 (not 1.0.6)! That could explain why this error has persisted on our site.

What I will do is update the Examine library to the latest beta supplied and see if it gets rid of these issues. Then I'll report back.

Regarding "And then when the index is next accessed the following error is logged": what I mean is that when someone attempts a site search (which queries the index) and the results are displayed on the search page, that exception is shown in place of the search results. The rest of the application works fine. The quickest way I've found to resolve the issue is to restart the app in Azure.

RE: slot swapping configuration - that sounds a little complicated. Maybe I'll try deploying the latest beta first, and if I am still having issues I might try going down that route.

deekoulos commented 3 years ago

@Shazwazza Hey Shannon,

No, I forgot to rebuild the index, and that was a mistake. Changes to members were not stored; this is really awful for the user experience.

I hope we finally work out a solution here.

In the logs I could see that, right after the exception from my last post, there were a couple of follow-up exceptions:

Number 1:

Lucene.Net.Index.CorruptIndexException: did not read all bytes from file "_eii.fnm": read 659 vs size 9958
   at Lucene.Net.Index.FieldInfos.Read(IndexInput input, String fileName)
   at Lucene.Net.Index.FieldInfos..ctor(Directory d, String name)
   at Lucene.Net.Index.SegmentReader.CoreReaders..ctor(SegmentReader origInstance, Directory dir, SegmentInfo si, Int32 readBufferSize, Int32 termsIndexDivisor)
   at Lucene.Net.Index.SegmentReader.Get(Boolean readOnly, Directory dir, SegmentInfo si, Int32 readBufferSize, Boolean doOpenStores, Int32 termInfosIndexDivisor)
   at Lucene.Net.Index.IndexWriter.ReaderPool.Get(SegmentInfo info, Boolean doOpenStores, Int32 readBufferSize, Int32 termsIndexDivisor)
   at Lucene.Net.Index.IndexWriter.ReaderPool.Get(SegmentInfo info, Boolean doOpenStores)
   at Lucene.Net.Index.DocumentsWriter.ApplyDeletes(SegmentInfos infos)
   at Lucene.Net.Index.IndexWriter.ApplyDeletes()
   at Lucene.Net.Index.IndexWriter.DoFlushInternal(Boolean flushDocStores, Boolean flushDeletes)
   at Lucene.Net.Index.IndexWriter.DoFlush(Boolean flushDocStores, Boolean flushDeletes)
   at Lucene.Net.Index.IndexWriter.PrepareCommit(IDictionary`2 commitUserData)
   at Lucene.Net.Index.IndexWriter.Commit(IDictionary`2 commitUserData)
   at Examine.LuceneEngine.Providers.LuceneIndex.IndexCommiter.TimerRelease()

Number 2:

Lucene.Net.Index.MergePolicy+MergeException: Exception of type 'Lucene.Net.Index.MergePolicy+MergeException' was thrown. ---> System.IO.IOException: Cannot overwrite: C:\home\site\wwwroot\App_Data\TEMP\ExamineIndexes\Members\_eik.cfs
   at Lucene.Net.Index.IndexWriter.HandleMergeException(Exception t, OneMerge merge)
   at Lucene.Net.Index.IndexWriter.Merge(OneMerge merge)
   at Lucene.Net.Index.ConcurrentMergeScheduler.DoMerge(OneMerge merge)
   at Lucene.Net.Index.ConcurrentMergeScheduler.MergeThread.Run()
   --- End of inner exception stack trace ---
   at Lucene.Net.Index.ConcurrentMergeScheduler.HandleMergeException(Exception exc)
   at Examine.LuceneEngine.Providers.ErrorLoggingConcurrentMergeScheduler.HandleMergeException(Exception exc)

Number 3:

System.IO.IOException: Cannot overwrite: C:\home\site\wwwroot\App_Data\TEMP\ExamineIndexes\Members\_eil.tvx
   at Lucene.Net.Store.FSDirectory.InitOutput(String name)
   at Lucene.Net.Store.SimpleFSDirectory.CreateOutput(String name)
   at Lucene.Net.Index.TermVectorsTermsWriter.InitTermVectorsWriter()
   at Lucene.Net.Index.TermVectorsTermsWriter.FinishDocument(PerDoc perDoc)
   at Lucene.Net.Index.TermVectorsTermsWriter.PerDoc.Finish()
   at Lucene.Net.Index.DocFieldProcessorPerThread.PerDoc.Finish()
   at Lucene.Net.Index.DocumentsWriter.WaitQueue.WriteDocument(DocWriter doc)
   at Lucene.Net.Index.DocumentsWriter.WaitQueue.Add(DocWriter doc)
   at Lucene.Net.Index.DocumentsWriter.FinishDocument(DocumentsWriterThreadState perThread, DocWriter docWriter)
   at Lucene.Net.Index.DocumentsWriter.UpdateDocument(Document doc, Analyzer analyzer, Term delTerm)
   at Lucene.Net.Index.IndexWriter.UpdateDocument(Term term, Document doc, Analyzer analyzer)
   at Lucene.Net.Index.IndexWriter.UpdateDocument(Term term, Document doc)
   at Examine.LuceneEngine.Providers.LuceneIndex.AddDocument(Document doc, ValueSet valueSet, IndexWriter writer)
   at Umbraco.Examine.UmbracoExamineIndex.AddDocument(Document doc, ValueSet valueSet, IndexWriter writer)
   at Examine.LuceneEngine.Providers.LuceneIndex.ProcessIndexQueueItem(IndexOperation op, IndexWriter writer)
   at Examine.LuceneEngine.Providers.LuceneIndex.ProcessQueueItem(IndexOperation item, IndexWriter writer)
   at Examine.LuceneEngine.Providers.LuceneIndex.ForceProcessQueueItems(Boolean block)

Number 4:

System.IO.IOException: Cannot overwrite: C:\home\site\wwwroot\App_Data\TEMP\ExamineIndexes\Members\_eim.cfs
   at Lucene.Net.Store.FSDirectory.InitOutput(String name)
   at Lucene.Net.Store.SimpleFSDirectory.CreateOutput(String name)
   at Lucene.Net.Index.CompoundFileWriter.Dispose()
   at Lucene.Net.Index.DocumentsWriter.CreateCompoundFile(String segment)
   at Lucene.Net.Index.IndexWriter.DoFlushInternal(Boolean flushDocStores, Boolean flushDeletes)
   at Lucene.Net.Index.IndexWriter.DoFlush(Boolean flushDocStores, Boolean flushDeletes)
   at Lucene.Net.Index.IndexWriter.PrepareCommit(IDictionary`2 commitUserData)
   at Lucene.Net.Index.IndexWriter.Commit(IDictionary`2 commitUserData)
   at Examine.LuceneEngine.Providers.LuceneIndex.IndexCommiter.TimerRelease()

Number 4 was raised 3 times in a row...

Shazwazza commented 3 years ago

@deekoulos the issue looks like it may have something to do with Lucene's merge policies. Again, I've never encountered this before; all I can do is try to investigate how it can happen with Lucene. The best I can do at this stage is to try to set up some load-testing operations in the hope of replicating the issue, but that will take time. Keep me posted on what happens on the next bot crawl.

Shazwazza commented 3 years ago

@deekoulos also if you can, can you invite me to your Cloud project that this occurs on? shannon at umbraco dot com

deekoulos commented 3 years ago

@Shazwazza you got an invite. Thanks for your support.

I've just blocked PetalBot, SemrushBot and MJ12bot via robots.txt to reduce the number of errors and side effects while the issue remains.
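
For reference, the blocking rules look roughly like this (a minimal robots.txt sketch; the user-agent tokens are the ones these crawlers publish):

    # Block aggressive crawlers while the index issue is investigated
    User-agent: PetalBot
    Disallow: /

    User-agent: SemrushBot
    Disallow: /

    User-agent: MJ12bot
    Disallow: /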

Shazwazza commented 3 years ago

@deekoulos ok, though I'd still be interested to see what happens on the next bot crawl and whether you get the same exceptions, different ones, or none at all.

deekoulos commented 3 years ago

@Shazwazza no worries, there are still other bots which I won't block, like Bing, Google, etc.

Were you able to find anything suspicious through your access to the cloud project?

deekoulos commented 3 years ago

Hey @Shazwazza

now it was Google's turn and we registered 7 exceptions again. We were lucky that this happened on a Saturday morning around 2 AM, otherwise the whole website would have been blocked by the index lock during business hours.

User Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Type: System.Data.SqlClient.SqlException
Exception Message: Lock request time out period exceeded. The statement has been terminated.
Stacktrace:

   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
   at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
   at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString, Boolean isInternal, Boolean forDescribeParameterEncryption, Boolean shouldCacheForAlwaysEncrypted)
   at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite, Boolean inRetry, SqlDataReader ds, Boolean describeParameterEncryptionRequest)
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean& usedCache, Boolean asyncWrite, Boolean inRetry)
   at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String methodName, Boolean sendToPipe, Int32 timeout, Boolean& usedCache, Boolean asyncWrite, Boolean inRetry)
   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
   at StackExchange.Profiling.Data.ProfiledDbCommand.ExecuteNonQuery() in C:\projects\dotnet\src\MiniProfiler.Shared\Data\ProfiledDbCommand.cs:line 272
   at Umbraco.Core.Persistence.FaultHandling.FaultHandlingDbCommand.b__31_0()
   at Umbraco.Core.Persistence.FaultHandling.FaultHandlingDbCommand.<>c__DisplayClass33_0`1.b__0()
   at Umbraco.Core.Persistence.FaultHandling.RetryPolicy.ExecuteAction[TResult](Func`1 func)
   at Umbraco.Core.Persistence.FaultHandling.FaultHandlingDbCommand.Execute[T](Func`1 f)
   at Umbraco.Core.Persistence.FaultHandling.FaultHandlingDbCommand.ExecuteNonQuery()
   at NPoco.Database.ExecuteNonQueryHelper(DbCommand cmd)
   at NPoco.Database.NPoco.IDatabaseHelpers.ExecuteNonQueryHelper(DbCommand cmd)
   at NPoco.Database.Execute(String sql, CommandType commandType, Object[] args)
   at NPoco.Database.Execute(Sql Sql)
   at NPoco.Database.Execute(String sql, Object[] args)
   at Umbraco.Core.Persistence.SqlSyntax.SqlServerSyntaxProvider.WriteLock(IDatabase db, TimeSpan timeout, Int32[] lockIds)
   at Umbraco.Core.Persistence.SqlSyntax.SqlServerSyntaxProvider.WriteLock(IDatabase db, Int32[] lockIds)
   at Umbraco.Core.Scoping.Scope.WriteLock(Int32[] lockIds)
   at Umbraco.Core.Services.Implement.MemberService.Save(IMember member, Boolean raiseEvents)
   at Feedbax.Core.Controllers.RenderMvc.ProfileController.Index(ContentModel model)
   at lambda_method(Closure , ControllerBase , Object[] )
   at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters)
   at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters)
   at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters)
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c.b__9_0(IAsyncResult asyncResult, ActionInvocation innerInvokeState)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResult`2.CallEndDelegate(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResultBase`1.End()
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.EndInvokeActionMethod(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.AsyncInvocationWithFilters.<>c__DisplayClass11_0.b__0()
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.AsyncInvocationWithFilters.<>c__DisplayClass11_2.b__2()
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.AsyncInvocationWithFilters.<>c__DisplayClass11_2.b__2()
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.AsyncInvocationWithFilters.<>c__DisplayClass11_2.b__2()
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass7_0.b__1(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResult`1.CallEndDelegate(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResultBase`1.End()
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.EndInvokeActionMethodWithFilters(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass3_6.<BeginInvokeAction>b__4()
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c__DisplayClass3_1.<BeginInvokeAction>b__1(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResult`1.CallEndDelegate(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResultBase`1.End()
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.EndInvokeAction(IAsyncResult asyncResult)
   at System.Web.Mvc.Controller.<>c.b__152_1(IAsyncResult asyncResult, ExecuteCoreState innerState)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncVoid`1.CallEndDelegate(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResultBase`1.End()
   at System.Web.Mvc.Controller.EndExecuteCore(IAsyncResult asyncResult)
   at System.Web.Mvc.Controller.<>c.b__151_2(IAsyncResult asyncResult, Controller controller)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncVoid`1.CallEndDelegate(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResultBase`1.End()
   at System.Web.Mvc.Controller.EndExecute(IAsyncResult asyncResult)
   at System.Web.Mvc.Controller.System.Web.Mvc.Async.IAsyncController.EndExecute(IAsyncResult asyncResult)
   at System.Web.Mvc.MvcHandler.<>c.b__20_1(IAsyncResult asyncResult, ProcessRequestState innerState)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncVoid`1.CallEndDelegate(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResultBase`1.End()
   at System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult)
   at System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result)
   at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
   at System.Web.HttpApplication.ExecuteStepImpl(IExecutionStep step)
   at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

All 7 exceptions were exactly the same as above.

The index gets locked due to load.

Please give me an update on the current status of this issue, including the next steps, since it is business critical. How can I contribute to finding a solution?

Isn't this issue related to the exceptions we just recorded? https://github.com/umbraco/Umbraco-CMS/issues/8433#issuecomment-740480759

Shazwazza commented 3 years ago

@deekoulos - This is an entirely different issue that you haven't previously submitted. So it looks like we've solved your previous issue(s) and perhaps you have new underlying ones.

Lock request time out period exceeded

This has to do with distributed locks when writing to/reading from the database at the same time as another thread/process. It means that while something is holding a write lock, something else is waiting for a read lock and cannot acquire it because it waited too long. Normally this is due to too many write locks being taken or queries executing slowly.
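
To illustrate, a minimal sketch (assuming Umbraco 8's IScopeProvider and the member tree lock id) of how such a write lock is taken; while it's held, other readers/writers on the same lock id wait and eventually time out:

    using Umbraco.Core;
    using Umbraco.Core.Scoping;

    public class MemberSyncService
    {
        private readonly IScopeProvider _scopeProvider;

        public MemberSyncService(IScopeProvider scopeProvider)
        {
            _scopeProvider = scopeProvider;
        }

        public void Sync()
        {
            using (var scope = _scopeProvider.CreateScope())
            {
                // Same distributed lock that MemberService.Save takes internally.
                // Anything else requesting a read or write lock on this id will
                // block until this scope completes, or throw "Lock request time
                // out period exceeded" if it waits too long.
                scope.WriteLock(Constants.Locks.MemberTree);

                // ... perform member writes here ...

                scope.Complete();
            }
        }
    }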

Assuming Google is indexing your site, all it's doing is reading - but in your stack trace above there is a write. Why is your application writing when a page is just being rendered?

   at NPoco.Database.ExecuteNonQueryHelper(DbCommand cmd)
   at NPoco.Database.NPoco.IDatabaseHelpers.ExecuteNonQueryHelper(DbCommand cmd)
   at NPoco.Database.Execute(String sql, CommandType commandType, Object[] args)
   at NPoco.Database.Execute(Sql Sql)
   at NPoco.Database.Execute(String sql, Object[] args)
   at Umbraco.Core.Persistence.SqlSyntax.SqlServerSyntaxProvider.WriteLock(IDatabase db, TimeSpan timeout, Int32[] lockIds)
   at Umbraco.Core.Persistence.SqlSyntax.SqlServerSyntaxProvider.WriteLock(IDatabase db, Int32[] lockIds)
   at Umbraco.Core.Scoping.Scope.WriteLock(Int32[] lockIds)
   at Umbraco.Core.Services.Implement.MemberService.Save(IMember member, Boolean raiseEvents)
   at Feedbax.Core.Controllers.RenderMvc.ProfileController.Index(ContentModel model)
   at lambda_method(Closure , ControllerBase , Object[] )
   at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters)
   at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters)
   at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters)
   at System.Web.Mvc.Async.AsyncControllerActionInvoker.<>c.b__9_0(IAsyncResult asyncResult, ActionInvocation innerInvokeState)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResult`2.CallEndDelegate(IAsyncResult asyncResult)
   at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResultBase`1.End()

Feedbax.Core.Controllers.RenderMvc.ProfileController.Index is saving a member?

The index gets locked due to load.

That isn't related to this exception.

Please give me an update on the current status of this issue including the next steps, since this issue is business critical. How can I contribute on this to find a solution?

Steps to replicate locally are what is required. I've been assisting in my own time. I'm unsure if you have a support agreement with Umbraco, but if so I would suggest raising an issue there; otherwise the best I can do is keep trying to assist, but at the end of the day it needs to be replicated locally. You may just need to create a lot of test data locally (possible example: https://github.com/umbraco/Umbraco-CMS/blob/v8/contrib/src/Umbraco.TestData/UmbracoTestDataController.cs) and then run something like an artillery script (i.e. what a bot would do).
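
If artillery is too much setup, even a throwaway console app can approximate a bot crawl. A rough sketch (the URL pattern here is hypothetical - point it at your own paged category routes):

    using System;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    class BotCrawlSimulator
    {
        static async Task Main()
        {
            var http = new HttpClient { BaseAddress = new Uri("http://localhost:8080") };
            http.DefaultRequestHeaders.UserAgent.ParseAdd("Mozilla/5.0 (compatible; LoadTestBot/1.0)");

            // Fire 500 concurrent requests spread across 50 "pages",
            // roughly what a crawler walking paged listings does.
            var tasks = Enumerable.Range(1, 500)
                .Select(i => http.GetAsync($"/members?page={i % 50 + 1}"));

            var responses = await Task.WhenAll(tasks);
            Console.WriteLine($"Non-success responses: {responses.Count(r => !r.IsSuccessStatusCode)}");
        }
    }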

deekoulos commented 3 years ago

Hey @Shazwazza thanks for your response. You are right, this is a different issue.

Indeed, there are read and write operations within the Index action, updating the members every 4 weeks based on current external API data. Based on your clarification I now have a good idea of how to resolve this one ("Lock request time out period exceeded"). Thank you!
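
Roughly, the plan is to guard the write so that ordinary GET requests (including bot crawls) stay read-only. A sketch, with a hypothetical "lastExternalSync" member property and an injected IMemberService:

    // Only persist when the external data is actually stale (every ~4 weeks),
    // instead of calling Save on every render.
    var member = _memberService.GetByUsername(username);
    var lastSync = member.GetValue<DateTime>("lastExternalSync");

    if (lastSync < DateTime.UtcNow.AddDays(-28))
    {
        // ... refresh values from the external API ...
        member.SetValue("lastExternalSync", DateTime.UtcNow);
        _memberService.Save(member); // the only call that takes the write lock
    }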

The original "AlreadyClosedException" index issue seems to be resolved, but please consider the new index-related issues mentioned 11 days ago, which were raised after installing the latest version of Examine and appear to be unresolved: https://github.com/umbraco/Umbraco-CMS/issues/9386#issuecomment-760279259.

I will observe if they will be raised again in the next 2-3 weeks and keep you updated.

deekoulos commented 3 years ago

Hi @Shazwazza,

there is an update on this. We registered the following exception:

System.InvalidOperationException: "The required index MemberIndex was not found"

It was caused by an Ahrefs bot: (Mozilla/5.0 (compatible; AhrefsBot/7.0; +http://ahrefs.com/robot/))

The requested URL was page 7 of a category page, which means the bot was crawling the whole category and its underlying pages (10 items per page) when this exception occurred.

This is the code:

            if (!ExamineManager.Instance.TryGetIndex(Umbraco.Core.Constants.UmbracoIndexes.MembersIndexName, out IIndex index) || !(index is IUmbracoIndex))
                throw new InvalidOperationException("The required index MemberIndex was not found");

So this should not be related to the code itself, since it's standard and works fine, but it seems that bot requests causing parallel load somehow make the index inaccessible.

Shazwazza commented 3 years ago

@deekoulos I really do very highly recommend that you set up load testing locally on your machine that mimics a bot crawl. That really shouldn't be very hard; there are plenty of tools to do this, and I like to use https://artillery.io/. Replicating these issues locally is a requirement for fixing things. So far I've been lucky to be able to just guess things for you.

I would recommend checking your logs around the time this error occurs for things like app restarts, or whether for some reason indexes were being rebuilt at that time. The TryGetIndex call checks internally in a ConcurrentDictionary. This dictionary never has items removed from it, only items added to it. The only way the index cannot exist is if it is being requested from the manager before it's added (i.e. during startup).

A full stack trace of the errors is much more helpful than just the exception message.

I would also strongly recommend not using ExamineManager.Instance and instead using Dependency Injection to inject IExamineManager.
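
Something along these lines (a sketch, assuming constructor injection is available in your controller; the controller name is made up):

    using System;
    using Examine;
    using Umbraco.Core;
    using Umbraco.Web.Mvc;

    public class MemberListController : RenderMvcController
    {
        private readonly IExamineManager _examineManager;

        // IExamineManager is registered in the container, so it can be injected
        // rather than resolved via the ExamineManager.Instance singleton.
        public MemberListController(IExamineManager examineManager)
        {
            _examineManager = examineManager;
        }

        protected ISearcher GetMemberSearcher()
        {
            if (!_examineManager.TryGetIndex(Constants.UmbracoIndexes.MembersIndexName, out var index))
                throw new InvalidOperationException("The required index MemberIndex was not found");

            return index.GetSearcher();
        }
    }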

I'm going to close this issue because the initial issue is fixed. I will release the latest Examine 1.1.0 today.

If you have further information on this latest issue/error with stack traces and steps to replicate please open a new issue with those details.

timeverts commented 3 years ago

@Shazwazza, I can confirm that the 1.1.0 beta version has been running on our production website for the past 2 weeks and to my knowledge the "IndexReader is closed" exception has not reoccurred. So thanks so much.

I have some last queries:

  1. When will Examine 1.1.0 be available on the official Nuget channel?
  2. Can Examine 1.1.0 be used on the latest version of Umbraco 7?

Shazwazza commented 3 years ago

Many thanks @timeverts

When will Examine 1.1.0 be available on the official Nuget channel?

This week

Can Examine 1.1.0 be used on the latest version of Umbraco 7?

No. A 0.2.0 version is pending. It does not include this fix; it will need to be back-ported, though the issue might not affect that version in the same way, since it uses a different version of Lucene. I'm unsure when I'll get that release out.

deekoulos commented 3 years ago

@Shazwazza thanks for your support. I can also confirm that the initially reported issues are history.

I just have one last question. You mentioned that the ExamineManager should be injected instead of using ExamineManager.Instance. Could you please elaborate on why you recommend this approach, since the docs still mention ExamineManager.Instance: https://our.umbraco.com/Documentation/Reference/Searching/Examine/quick-start/