Closed: maartenba closed this issue 11 years ago
Not sure what dropping those dependencies will get us, but hey, if you can do it, that would be great. Adding dependency injection would be great. NuGet split packages would help a bunch.
Features: implement an HttpModule that can be plugged in during development at the URL sitemaps.axd. That would give a full diagnostic view of the tree and its relationships. Scan for dynamic node implementations? https://github.com/maartenba/MvcSiteMapProvider/issues/98
I think more than features, there are many bugs/issues:
- Memory leak issues - https://github.com/maartenba/MvcSiteMapProvider/issues/118
- Documentation missing many features and solutions
- Potential default implementation changes - https://github.com/maartenba/MvcSiteMapProvider/issues/115
- Order of menu items defined in code in relation to the XML file - https://github.com/maartenba/MvcSiteMapProvider/issues/114
- Some way to specify visibility (and other custom attributes) on the SiteMapNode attribute - https://github.com/maartenba/MvcSiteMapProvider/issues/113
- Event not firing - https://github.com/maartenba/MvcSiteMapProvider/issues/101
My latest code changes you just accepted fix this, so it should be closed: https://github.com/maartenba/MvcSiteMapProvider/issues/102
Many 'issues' open that should be closed - but I think you are only one with rights to get these cleaned up. In general, just more activity and reviving of this great project! :-)
Here is my wish list:
That said, I am not willing to wait until you get around to it to make a version I can use. I'd be happy to contribute what I come up with back into this project if there are no objections.
I was actually looking into creating another sitemap provider to implement the functionality so I can reuse the UI and sitemap XML functionality that is already here. However, based on your comments about dropping the dependency on System.Web, perhaps it would be best to start off with some kind of SiteMap loader interface to handle the starting point where the provider would normally go? I am open to suggestions.
Of course, I could just make a standard SiteMap provider that is just a shell that delegates its work to other injected classes and then later the SiteMap provider can be dropped without having a huge impact on the rest of the project.
I'm working on some initial bits for vNext; if you want, I'll post them on a separate branch.
The idea there is to:

- Use IDependencyResolver out of the box.
- Keep a node tree in memory that is mapped to a SiteMap for every request. This should provide more flexibility and better multithreading. The multi-tenant aspect is also easier to do based on that.
Sure, I'd like to see the latest and greatest. Thanks.
See branch "v4". Do note that this is very much work in progress and that no final product (nor working application) is in there. It does give you some pointers of where I'm heading.
I took a look and had a few thoughts/questions:
My hope is to introduce a couple of seams into the provider: 1) to allow the tree (or a tree provider) to be injected by the host application, and 2) to allow the configuration options to be injected by the host application. Naturally, it makes sense for them both to default to loading from the Mvc.sitemap file and the web.config file, respectively. It also looks like it makes sense to have a dynamic node provider (seam) that defaults to an assembly scanner.
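The two seams could look something like this. To be clear, these interface and member names are my own illustration, not anything that exists in the v4 branch yet:

```csharp
// Hypothetical sketch of the two seams described above.
public interface ISiteMapTreeProvider
{
    // Builds (or loads) the node tree; the default implementation
    // would read from the Mvc.sitemap file.
    ISiteMapNode BuildTree();
}

public interface ISiteMapConfiguration
{
    // Configuration seam; the default implementation would read web.config.
    string SiteMapFile { get; }
    bool SecurityTrimmingEnabled { get; }
}
```

The host application would register its own implementations with the container to replace either default.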
It looks like you created an interface named ISiteMapSource that would work great for the tree injection, but there is no way to inject a custom provider other than through the config file.
I realize this is just prototyping, but what was the reasoning of using the ASP.NET sitemap provider as a base? That is what I had in mind as well until you mentioned you might do otherwise.
Pros:

- The ASP.NET sitemap provider provides a clean separation between the "data access" code and the presentation logic.
- The ASP.NET sitemap provider is an interface that many other developers are familiar with.

Cons:

- YAGNI: the ASP.NET sitemap provider technically doesn't need to be there.
- The ASP.NET sitemap provider is somewhat difficult to inject dependencies into. This could be overcome by using property setter injection or by raising events to provide dependencies (for those who have not yet entered the DI world).
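For the record, the property setter injection workaround would look roughly like this. Providers instantiated by the ASP.NET provider model can't take constructor arguments, so dependencies would be exposed as writable properties with sensible defaults (all names here are illustrative):

```csharp
// Sketch only: provider overrides elided; the resolver interface and
// default class are placeholder names.
public class MvcSiteMapProvider : System.Web.SiteMapProvider
{
    private ISiteMapNodeUrlResolver urlResolver = new DefaultSiteMapNodeUrlResolver();

    // The host application (or a DI hook in Application_Start) can
    // overwrite this after the provider framework constructs the instance.
    public ISiteMapNodeUrlResolver UrlResolver
    {
        get { return this.urlResolver; }
        set { this.urlResolver = value; }
    }

    // ... required SiteMapProvider overrides elided ...
}
```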
Personally, I have no preference if the ASP.NET provider stays or goes, but clearly the project needs to go in a different direction if the provider is not used.
Finally, how much time do you have to dedicate to this project? The reason I ask is I could either start working with this full time now or circle back around after about 3 weeks of working on other bits of my app and join in when you have more of a base to work with. Of course, if this project doesn't change much in that timeframe it doesn't make much sense for me to wait.
I was thinking of building the provider in 3 stages.
That last stage is of concern to me: drop it or not? I would like to drop it but this breaks anything out there that uses sitemaps as the data source.
I'll be traveling the next weeks so time will be limited, feel free to do some prototyping already. Earlier feedback is better imho :-)
Agreed. Dropping the provider trashes any chance for a smooth upgrade. But then, it could also make the library easier to work with for newcomers.
Looks like we are pretty much on the same page.
Just so you understand my end game, I am planning to make an end-user drag and drop tree interface that can be used to create/rearrange the site structure. There is a fully-functional sample of exactly what I would like to achieve here: http://www.bigace.de/demo.html. Have a look at the v3 administration interface and go to Pages > Administration from the menu. The pane on the left allows for creating and moving of pages. All changes are reflected in the main menu of the site and the site map path.
Note the following:
The tree isn't simply translated, there is a separate tree structure for each language.
Anyway, I will get started working on a prototype. For now, I will assume the provider will stay, but my provider implementation will just be a shell that can be removed without too much trouble. I will try to carry forward your ideas, but my main focus will be in creating an implementation that can be easily integrated with a custom business layer and by extension, a custom data source.
On 1.2: that's just a step which allows you to plug in between tree creation and tree usage. I was thinking to allow things like node optimization (e.g. for some nodes the URL can be made static for all requests instead of having to resolve it all the time). It also allows you to restructure some things, or replace the entire tree with another one. Look at it as the last chance to do anything with the tree that applies to all requests from then on.
But yes, seems we're on the same page. Looking forward to some prototypes!
On a side note: I'm planning on keeping the current SiteMapXmlSiteMapSource to have at least some backward compatibility. Other than that feel free to cut the cords with the ASP.NET provider model.
I do not see any real problem with dropping ASP.NET provider model support. The only things that used it were webforms based anyway.
@maartenba
I am pursuing the path of dropping support for the ASP.NET provider. This turned out to be a pretty deep rabbit hole of rooting out necessary bits from the .NET framework, but it is clear that to gain control over the lifetime of the in-memory sitemap structure it is easier if it is not part of the provider.
The next step is to try to make what I have function the same way it did before by delegating calls from an actual provider. However, although I have a fairly good idea what the dynamic nodes and assembly-based (reflection) nodes are for at this point, there don't seem to be any samples for the reflection-based nodes in the project, and the samples provided for dynamic nodes are very basic. Do you have a working demo project you can provide so I can see how this works?
I still have a fair amount of refactoring to do after getting the code functional, but once I have working code, keeping it in a working state should be much easier.
Nothing I can share. The reflection-based nodes can be created using one of the [MvcSiteMapNode] attributes; the important thing is to specify their ParentKey property to build the tree structure. Dynamic node providers are in the MvcMusicStore sample, although they are indeed pretty basic. In theory, you can add child nodes to the nodes generated there, and so on.
Yesterday, I finally got this to a point where it is functioning without using the ASP.NET provider. I ended up dropping the ASP.NET provider altogether because it was just getting in the way of seeing the big picture. A compatibility layer (builder) can still be added to load the data from an alternate source (such as an ASP.NET sitemaps provider) if needed. If you are interested in having a look, the latest can be found here: https://github.com/NightOwl888/MvcSiteMapProvider/tree/v4, though it isn't very pretty at this point, is poorly documented, and the localization code isn't yet fully implemented so nothing shows up for localized nodes.
My vision is to have an object graph of smart business objects whose purpose is to maintain the hierarchical relationships between nodes. The outside world should have no direct access to this data - it should only have access to methods and properties that all go through the business logic layer within (or injected into) the object graph. If you are familiar with the CSLA framework, this is sort of what I have in mind for managing the behavior associated with the tree structure itself (without the dependency on CSLA, that is). Here is an overview of these objects.
- SiteMap - Replaces SiteMapProvider and is essentially the root object of the graph.
- SiteMapNode - Replaces the original SiteMapNode in System.Web.
- SiteMapNodeCollection - Replaces the original SiteMapNodeCollection in System.Web.

SiteMapNode has other child objects (such as Attributes, RouteValues, etc.) that do not yet have their own implementations, but probably will.
As for the rest of the code, I envision a set of services whose default implementation can easily be swapped by a custom one by simply creating an implementation and then changing the DI configuration to inject them. This will allow for a wide variety of use cases.
As it turns out in a multi-tenant application, deciding when a request should belong to one tenant vs another is an application-specific thing. Therefore, the logic of when a new sitemap should be built and cached needs to be easily overridable by the application. I made a SiteMapLoader to handle this task, but as it stands it is still broken because it allows the UI to override the "current" sitemap in some cases but not others. It also doesn't have a very robust way to map a specific SiteMapBuilderSet named configuration to a given request. This is something I will address shortly.
Semantics:
Now some decisions need to be made to shape the direction from here. Some things jumped out at me as being unusual, but it is difficult to tell exactly what the underlying reasons for their being are. Other things just need to be put out there for discussion.
As for referencing IDependencyResolver, I just didn't see any need. Using constructor injection, all dependencies can be provided by the DI container directly, and doing so gives the DI container control over the lifetime of the objects. This has a distinct advantage here - if we have a project with 10,000 nodes and we want to inject a UrlResolver service into those nodes that contains custom logic, nothing is gained (but a lot of memory and/or CPU cycles lost) by creating 10,000 UrlResolver service instances using IDependencyResolver. On the other hand, using the DI container to supply these dependencies can ensure that only 1 instance is ever created regardless of how many nodes use it.
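To make that concrete, here is roughly what I mean, using StructureMap-style registration (the resolver interface and concrete class names are just placeholders):

```csharp
// Register the URL resolver as a singleton so every node shares one
// instance, instead of resolving a fresh one per node via
// IDependencyResolver. StructureMap 2.x style shown.
ObjectFactory.Initialize(x =>
{
    x.For<ISiteMapNodeUrlResolver>()
     .Singleton()
     .Use<CustomUrlResolver>();
});

// Every SiteMapNode built through the container then receives the same
// resolver instance via constructor injection:
public class SiteMapNode
{
    private readonly ISiteMapNodeUrlResolver urlResolver;

    public SiteMapNode(ISiteMapNodeUrlResolver urlResolver)
    {
        this.urlResolver = urlResolver;
    }
}
```

With 10,000 nodes that is one resolver instance instead of 10,000, and the container owns its lifetime.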
To support cases where no DI container is used, I plan on making a basic object composer to do so that contains the logical defaults. It would read configuration values from specific AppSettings keys and use them to configure similar options to what are available now. It would just require a single line of code to be placed in the Application_Start method, such as Composer.Compose(). That said, I don't plan to spend that much time on this because it is not something I will need. A lot could be done here - custom configuration section for MvcSiteMapProvider, configuration for custom implementations of interfaces put into the config file, etc. However, being that this class would be totally replaced if a DI container were in use, you may want to encourage developers to use DI if they have a pressing need to override an implementation with another. Alternatively, this could be built out into an effective way to configure MvcSiteMapProvider without using a DI container.
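The composer I have in mind would be something along these lines. The class name, the AppSettings key, and the wiring are all hypothetical at this point:

```csharp
// Rough sketch of the "poor man's DI" composer described above.
public static class Composer
{
    public static void Compose()
    {
        // Read simple options from AppSettings...
        var siteMapFile =
            System.Configuration.ConfigurationManager
                .AppSettings["MvcSiteMapProvider_SiteMapFile"]
            ?? "~/Mvc.sitemap";

        // ...then hand-wire the logical default object graph here.
        // (Internals elided in this sketch.)
    }
}

// In Global.asax:
// protected void Application_Start() { Composer.Compose(); }
```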
Ok, that turned out to be very long-winded. The sooner I get feedback on these points, the more likely the feedback is to influence the direction I go. Thanks.
One other issue I have is that many times the same sitemap node can appear in several different trees. Imagine a simple example where you had the same menu item in two different areas. This makes sort of a 'circular' structure. I wonder if that can be part of this new design?
@waynebrantley
Is this different from "Multiple paths to the same node" in my wish list above? This sounds like a similar scenario; at least, the way you describe it as "same sitemap node in many trees" sounds like the same solution I came up with before: creating multiple XML files and using session state to track the current one, overriding it to the "default" if anything but a certain set of pages was requested next chronologically.
In short, my solution was to fix an issue with a many-to-many relationship between categories and products where one product could belong to many different categories.
I plan to address this, but not until after a few more rounds of refactoring. I haven't thought of a way to do it without using some kind of user session based tracking, though. Well, there is using the URL querystring to track the path, but that is not SEO friendly at all and is not a path I plan to take. Let me know if you have any other ideas as to how this could be implemented. The main issue at hand is how to tell which path you are at in the map if a node is requested that exists in 2 different places?
I guess another possible solution to this would be to actually put the same exact product on multiple unique URLs. This would force the burden of properly using the canonical meta tag onto the developer, but since this is something that is supported by both Google and Bing already this could potentially be a solution.
Wow, the light bulb just came on. I never really considered making the canonical meta tag generation part of the sitemap, but now that I think about it, it makes perfect sense to make a "CanonicalHelper" in the same way the "TitleHelper" works.
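Something like this is what I'm picturing. The extension method name, the `SiteMaps.Current` accessor, and the `CanonicalUrl` node property are assumptions for the sketch, not the shipped API:

```csharp
// Sketch of a "CanonicalHelper" modeled on the existing TitleHelper.
public static class CanonicalHelper
{
    public static MvcHtmlString CanonicalTag(this HtmlHelper helper)
    {
        var node = SiteMaps.Current.CurrentNode; // hypothetical accessor
        if (node == null)
            return MvcHtmlString.Empty;

        // Emit <link rel="canonical" href="..." /> for the current node.
        var tag = new TagBuilder("link");
        tag.MergeAttribute("rel", "canonical");
        tag.MergeAttribute("href", node.CanonicalUrl); // hypothetical property
        return MvcHtmlString.Create(tag.ToString(TagRenderMode.SelfClosing));
    }
}
```

The view would then just call `@Html.CanonicalTag()` in the layout's head section.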
@NightOwl888 - yes that is what is needed....a many to many relationship would be great. When it exists in two places and you need to 'find' where you are in the sitemap, I was thinking they would have a priority - so the FIRST item in the many-many relationship would be the one that it is in. So for your example if a product was in categoryA and categoryB...and you went to the product page...it would show categoryA as the parent - if category A was the 'first' link in the many-many relationship.....This could of course be an injected piece of code where users could decide some other logic based on their specific requirements.
@waynebrantley - let's think about a side effect of that for a minute. If I were a customer browsing your site and first navigated to categoryB, the sitemap path would be something like "Home > CategoryB". Now I choose the product (let's just call it Product1) from the categoryB page. When the product page is displayed, it would show the sitemap path as "Home > CategoryA > Product1". It just jumped from categoryB to categoryA for no apparent reason.
To me it feels like this behavior is broken and it is likely to cause confusion to the user, would you agree? This is where I took a different path to solve the issue before, and ended up swapping different XML structures altogether, keeping track of the "current" one in session state.
2 other solutions I can think of:
Anyway, as you can see I am leaning toward the second solution, but that isn't to say the first one can't be worked out as well. Some people probably wouldn't mind resorting to session state and swapping as long as the site had the exact semantics they wanted - that is, each product existing on exactly 1 URL. It is clearly more complicated that way, but as I said, I worked it out once before.
Session state is not a realistic option in my opinion.
Well, stateful doesn't necessarily mean session state. Using a cache or a static member keyed by a user-specific ID is one option; storing the state in a cookie is another. Of course, in the end they all use cookies to manage the user ID anyway. Realistically, all that is needed is a short string identifier to track the ID of the cached sitemap the user last navigated to, so it would not be very expensive in terms of memory. A cookie might be a good fit in this case.
I am just trying to avoid storing extra information in the URL because that would be the kiss of death for SEO.
Viewstate is not an option because the information we need is lost going from page to page.
There is another option, but it is not very reliable in my experience - that is to use the URL referrer header to determine the last position the user navigated from rather than keeping it in state. Unfortunately, not all browsers reliably send this value and some may have different rules about whether to send it based on whether or not you are navigating within the same domain. It might be worth a shot, though - maybe there is a way to make it more reliable.
In your opinion, is the use of multiple URLs for the same product with a handy tool to manage the canonical tag on the extra URLs a realistic option?
Yes, I think that is realistic.
I would make it a small pluggable component. The idea with v3 was to specify this scenario in your action method by fiddling with the sitemap structure directly, although having it stored in a cookie by default seems more logical to me. (Note "by default": it would be great to be able to plug in one that runs on top of session state / a database / whatever the developer prefers.)
Actually, after doing some research, it seems that the best approach would be a combination of the ideas I have been toying with. According to http://forums.asp.net/p/1580897/3985044.aspx, a common approach to the problem is to simply tack on extra position info to the URL. In MVC terms, that would just be an extra parameter in the route.
This would normally be a problem for SEO. However, if there were also a way to manage the canonical tag using a similar component to the TitleHelper and a logical way to choose the primary one in the sitemap node/XML structure, then this wouldn't be so much of an issue. Essentially, this tool could be used to tell the search engines that the URL with the extra positioning info is equal to the URL without it, and then all the pieces fit, no state is necessary. Then it would not really matter if you want to use the same URL with different query string parameters or multiple URLs for the page - this choice can be made by the developer at whim.
One thing is pretty clear, the sitemap is a good place to manage the canonical tag data to make it virtually automatic. There is only 1 way to declare the canonical tag and this behavior is shared across all web sites.
@maartenba
I am now several rounds of refactoring in, but have still not tackled the fact that HttpContext being passed around as API parameters isn't a testable approach.
I see that Microsoft's answer was to create an abstract HttpContextBase with a similar interface, with the concrete type HttpContextWrapper to fill its shoes. However, I also noticed that there are not one but two HttpContextWrapper subclasses in this project: HttpContext2 and HttpContextMethodOverrider.
It appears as if the HttpContext2 implementation is supposed to cover the general case for MvcSiteMapProvider as it is used in every place in the project but one, is this a correct assumption?
However, the other implementation (along with its partner, HttpRequestMethodOverrider) is a bit of an enigma. It overrides the HttpMethod member of the request object, but only in the case where it is instantiated with an HttpMethod passed to its constructor (second parameter) that is not null. However, the only place it is used in the project, it is declared like this:
```csharp
HttpContextBase httpContext = new HttpContextMethodOverrider(context, null);
```
So, in effect we have a general HttpContextWrapper class that is exactly the same as Microsoft's implementation. I know there must have been some reasoning behind this, but it wasn't left behind in the code. If you can explain this, please do.
Anyway, I am thinking about taking the route of creating an injected abstract factory that can both create a generic HttpContextWrapper and wrap it in one or both of the other layers if necessary (probably with a decorator pattern). However, both of these classes desperately need new names to clarify their intent. I am thinking about going with either MvcHttpContext or MvcSiteMapHttpContext for HttpContext2 (I am not sure which level it applies to or if it even covers the general case). It isn't clear whether the other one should just be swapped for HttpContextWrapper and scrapped or if there is some future use for it.
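Roughly, the factory/decorator combination I have in mind would look like this. Every name here except HttpContextWrapper is a placeholder I made up for the sketch:

```csharp
// Sketch of the injected abstract factory + decorator idea.
public interface IHttpContextFactory
{
    HttpContextBase Create(System.Web.HttpContext context);
}

public class DecoratingHttpContextFactory : IHttpContextFactory
{
    public HttpContextBase Create(System.Web.HttpContext context)
    {
        // Start from the plain Microsoft wrapper...
        HttpContextBase wrapped = new System.Web.HttpContextWrapper(context);

        // ...and layer on project-specific behavior only when needed,
        // e.g. the AppRelativeCurrentExecutionFilePath override.
        // (MvcSiteMapHttpContextDecorator is a hypothetical subclass of
        // HttpContextBase that forwards everything else to "wrapped".)
        return new MvcSiteMapHttpContextDecorator(wrapped);
    }
}
```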
One more thing. Typing "new HttpContextWrapper(HttpContext.Current)" or similar every time we need it passed into the SiteMap API is not exactly brief or intuitive. So should the external interface be changed to HttpContextBase, or should there be a method that accepts HttpContext that calls a factory internally to wrap HttpContext and then cascades the call to a protected method that accepts HttpContextBase? I am leaning toward the first approach, as in my book less code is better and the primary user of this will be the UI layer in the same project so the non-intuitive API is not such a big deal.
The HttpContext2 implementation covers one fix for a closed property in HttpContextWrapper so yes, that assumption is correct.
HttpContext2 provides us with the option of having an HttpRequest2 returned for the Request property, which in turn provides us with the possibility to implement our own AppRelativeCurrentExecutionFilePath and CurrentExecutionFilePath properties. These are required for having UrlBuilder generate the correct URL for a given set of RouteData even when that set of RouteData is not active as the current requested URL.
The Overrider classes do something similar but good question, I think they can be merged with the 2 classes.
I think we should be using HttpContextWrapper everywhere but have it instantiated either on demand if some custom properties are required for some things, or via the IoC container which can provide us with new HttpContextWrapper(HttpContext.Current)
Thoughts?
> I think we should be using HttpContextWrapper everywhere but have it instantiated either on demand if some custom properties are required for some things, or via the IoC container which can provide us with new HttpContextWrapper(HttpContext.Current)
For a while I have been considering just removing HttpContext from the API altogether and passing it in as a constructor argument only to the classes that directly use it. However, there are some methods where the intent can be made less ambiguous if the parameter is included.
For example, I made a method that is used to create a sitemap cache key that is used to determine whether the current request should access a cached sitemap or store a new one. Without being passed HttpContext, it is difficult to understand how the method should be implemented.
Presently, there are only 3 methods each in the interfaces of ISiteMap and ISiteMapNode where HttpContext is explicitly passed in. Technically speaking, these could all be eliminated. In fact, neither of these classes uses HttpContext directly - it is simply passed along to other services that use it. The question is, would Node.IsAccessibleToUser() be any less clear than Node.IsAccessibleToUser(HttpContextBase)? Or more properly, is there any realistic reason to pass in a different HttpContext than the current one?
For places where HttpContext.Current is grabbed out of thin air instead of being passed through the API as a parameter, I agree that it should be passed as a constructor argument that is injected by the DI container.
It would not be less clear, both are viable imho
```csharp
// From ISiteMap
ISiteMapNode FindSiteMapNode(string rawUrl);
ISiteMapNode FindSiteMapNode(HttpContext context);
ISiteMapNode FindSiteMapNode(ControllerContext context);
ISiteMapNode FindSiteMapNodeFromKey(string key);
bool IsAccessibleToUser(HttpContext context, ISiteMapNode node);
```
One thing is for sure, FindSiteMapNode() with no parameter is ambiguous. I suppose in the spirit of things it could be renamed FindSiteMapNodeFromCurrentContext() to make it clear. Thoughts?
```csharp
// From ISiteMapNode
RouteData GetRouteData(HttpContextBase httpContext);
bool IsVisible(HttpContext context, IDictionary<string, object> sourceMetadata);
bool IsAccessibleToUser(HttpContext context);
```
GetRouteData() also seems like it might not make sense without context being passed in, or alternatively it could be renamed to GetRouteDataFromCurrentContext(). I also discovered that this method uses the context parameter directly internally, and it matches the signature of the GetRouteData() method from RouteTable.Routes. We should probably leave that one alone, as it already uses HttpContextBase.
However, it looks like the other methods still make perfect sense without the extra fluff. On the other hand, I have this sinking feeling that Microsoft had some master plan that I am not aware of and there was some better reason for those context parameters being passed than you or I can come up with in a few minutes.
Their main idea was to move from System.Web to System.Web.Abstractions to make things testable. No idea for all the passing around though :)
The only thing I can come up with is continuity. Without the parameter, there would be no way for the presentation or UI layers to choose the specific context to pass along to the service that uses it. Using DI, a strategy pattern or similar could be used to choose a configured context object, but it would still require a parameter to choose a unique instance.
That said, there are a few methods and properties (from Microsoft) that simply grab HttpContext.Current out of the air and don't give the UI layer a choice. Specifically, they all call IsAccessibleToUser(). I think that one (or 2, rather) is safe to change.
IsVisible cascades the call to the visibility provider. That one is safe to change, I think - assuming nobody would ever want to pass anything other than HttpContext.Current.
The only one up in the air is FindSiteMapNode(HttpContext). It just doesn't feel right calling a Find() method without passing any criteria. I think renaming it might make it OK as long as the name is clear where the criteria is coming from. I guess if anyone screamed that they need a way to pass a different context, an additional method could be added to the interface later.
Naming Conventions
Speaking of names, I was also thinking that it might be a better choice in the DI world to name the "Default" concrete class according to the conventions that many DI containers use. For example, with StructureMap nothing extra needs to be configured if you have an interface named ISomething and a class named Something - the container already knows implicitly which one is the default.
Using a naming convention where all of the default concrete classes are prefixed with "Default" is supported, but requires extra configuration. Naming the default implementation after the interface is supported out of the box, and from what I have read, some other DI containers do this as well.
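For example, with StructureMap 2.x the convention-based registration is a one-liner: an interface named ISiteMap is automatically paired with a concrete class named SiteMap, with no per-type configuration.

```csharp
// StructureMap's built-in default conventions: ISomething -> Something.
ObjectFactory.Initialize(x =>
{
    x.Scan(scan =>
    {
        scan.TheCallingAssembly();
        scan.WithDefaultConventions();
    });
});
```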
Thoughts?
Okay with that. ISiteMapProvider – SiteMapProvider is most logical I think.
Well, I've dropped the "Provider" suffix because it no longer implements the provider pattern, but you get the idea. :)
@maartenba
I found an inconsistency that I am not sure is intentional:
```csharp
// From DefaultSiteMapProvider

// Fetch route data
var httpContext = new HttpContext2(context);
if (routeData != null)
{
    RequestContext requestContext = new RequestContext(httpContext, routeData);
    VirtualPathData vpd = routeData.Route.GetVirtualPath(requestContext, routeData.Values);
    string appPathPrefix = (requestContext.HttpContext.Request.ApplicationPath
        ?? string.Empty).TrimEnd('/') + "/";
    node = base.FindSiteMapNode(httpContext.Request.Path) as MvcSiteMapNode;
    // More stuff here...
```
```csharp
// From DefaultSiteMapNodeUrlResolver
if (HttpContext.Current.Items[UrlHelperCacheKey] == null)
{
    RequestContext ctx;
    if (HttpContext.Current.Handler is MvcHandler)
        ctx = ((MvcHandler)HttpContext.Current.Handler).RequestContext;
    else
        ctx = new RequestContext(new HttpContextWrapper(HttpContext.Current), new RouteData());
    HttpContext.Current.Items[UrlHelperCacheKey] = new UrlHelper(ctx);
}
return (UrlHelper)HttpContext.Current.Items[UrlHelperCacheKey];
```
If you draw your attention to the RequestContext creation, it is simply created directly in DefaultSiteMapProvider, but in DefaultSiteMapNodeUrlResolver, it is preferred to get the current MvcHandler.RequestContext if it exists.
What I am wondering is if this should be consistent every time a RequestContext is required and if so, which of these methods is the correct way of retrieving it?
Note that AuthorizeAttributeAclModule also uses the same approach as DefaultSiteMapProvider.
Not sure why we did that. Perhaps history on mvcsitemap.codeplex.com reveals something?
Well, I ended up just leaving it as is because it is the safest route (although I changed the RequestContext to be created by a factory instead of newing it up directly).
FYI - I have implemented the visitor stage. I ended up reusing the ISiteMapBuilder interface that was already there, so the recursive loop happens in VisitingSiteMapBuilder, which then calls the Execute() method on ISiteMapNodeVisitor, which can do anything it wants with the node. Although the only working visitor that is included is UrlResolvingSiteMapNodeVisitor, I have also included a CompositeSiteMapNodeVisitor so visitors can be configured in a chain.
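The composite piece is the simple part; it just fans each node out to every configured visitor. The member details below are a sketch, though the interface and class names match what I described:

```csharp
// Sketch of the visitor stage: each visitor gets a crack at every node
// during the build, after tree construction but before caching.
public interface ISiteMapNodeVisitor
{
    void Execute(ISiteMapNode node);
}

public class CompositeSiteMapNodeVisitor : ISiteMapNodeVisitor
{
    private readonly ISiteMapNodeVisitor[] visitors;

    public CompositeSiteMapNodeVisitor(params ISiteMapNodeVisitor[] visitors)
    {
        this.visitors = visitors;
    }

    // Runs each configured visitor against the node in turn.
    public void Execute(ISiteMapNode node)
    {
        foreach (var visitor in this.visitors)
        {
            visitor.Execute(node);
        }
    }
}
```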
You can check out the latest here: https://github.com/NightOwl888/MvcSiteMapProvider/tree/v4
If you have a look, let me know if you find anything in the wrong namespace. I tried to fix that today, but I may have overlooked or misinterpreted something.
Note that securityTrimmingEnabled is now an attribute on the Sitemap XML file, and the Global.asax in the demo is intentionally blown out of proportion as a reminder to myself of what needs to be implemented in the default poor man's DI container that will ship with this project.
@maartenba
Update
New Features Implemented:
I also updated the documentation so (almost) every class at least has a sentence or two to describe what it does (at least for the classes that I know!).
Anyway, working out the cache helped make it clear why Microsoft had code to lock all of the properties so they couldn't be changed. This was so the cached copy wouldn't get inadvertently altered by one request which would show up in all of the other requests.
That said, I found something that is broken - the SiteMapTitleAttribute. This class uses the Title attribute of a SiteMapNode to temporarily override the Title for a specific request. However, since it is altering the cached copy it is changing it for all other requests. Even worse, if the cache gets refreshed between the method that stores the original value and the one that sets the Title back to the original value, it could overwrite the title with something that is out of date.
I think I now know where you were going with the whole clone-per-request idea (a direction I haven't gone) - it could make a writable copy of the SiteMap that is only valid for the current request, but that seems like it might cause a lot of extra overhead for what will likely be rarely used features.
Thoughts?
The Road Ahead
I am now working on a prototype of a complex e-commerce application as a proof of concept, to verify that MvcSiteMapProvider implements all of the features needed to support the project I am working on (and to add any that are still missing). The idea is to make something that is organized more like a real-world e-commerce application than anything Microsoft has come up with.
Planned Features:
I think we do need one writable tree for every request, for example when using SiteMapTitleAttribute. I personally use it all the time. Perhaps we can clone only when a write to a node is done after initialization? Or even have a clone per request (which would add more overhead even if you are not using any of the attributes).
Looking forward to the sample app.
Is there really any reason to make properties other than Title writable? Writing to some of them could cause other problems.
I just came up with another idea that would work. The top layer of the inheritance tree is a RequestCacheableSiteMapNode. I could simply make that layer store the title for the current request in the request cache if the sitemap is read-only (which indicates the caller is the presentation layer) and read it back from the request cache in the case where the value exists, otherwise cascade the call to the base class.
The only issue with this I see is that if someone were to override SiteMapNode, they would likely want to bypass the RequestCacheableSiteMapNode and inherit from its base class so their logic is not based on a request-cached copy of the base. But then, they would probably want to re-implement the request caching anyway as it helps performance a lot, so they would likely copy this logic into their own class.
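As a sketch of the idea (the member names and the stand-in RequestCache class are hypothetical; in the real application the request cache would be backed by HttpContext.Current.Items): once the sitemap is read-only, writes to Title land in per-request storage, so the shared cached node is never mutated and reads prefer the per-request override.

```csharp
using System;
using System.Collections.Generic;

// Stand-in for the per-request cache (HttpContext.Current.Items in the
// real application).
public class RequestCache
{
    private readonly Dictionary<string, object> items = new Dictionary<string, object>();

    public object Get(string key)
    {
        object value;
        items.TryGetValue(key, out value);
        return value;
    }

    public void Set(string key, object value) { items[key] = value; }
}

// Simplified base node holding the shared, cached Title value.
public class SiteMapNodeBase
{
    public string Key { get; set; }
    public bool IsReadOnly { get; set; }
    public virtual string Title { get; set; }
}

// Hypothetical sketch of the request-cache override described above.
public class RequestCacheableSiteMapNode : SiteMapNodeBase
{
    private readonly RequestCache requestCache;

    public RequestCacheableSiteMapNode(RequestCache requestCache)
    {
        this.requestCache = requestCache;
    }

    public override string Title
    {
        get
        {
            // Prefer the per-request override when one exists.
            var overridden = requestCache.Get("Title_" + Key) as string;
            return overridden ?? base.Title;
        }
        set
        {
            if (IsReadOnly)
                requestCache.Set("Title_" + Key, value); // request-scoped override
            else
                base.Title = value;                      // still building: write through
        }
    }

    // Exposed only so the example can show the shared value is untouched.
    public string SharedTitle { get { return base.Title; } }
}
```

This would let SiteMapTitleAttribute keep working without a writable clone of the tree, at the cost of the inheritance caveat mentioned above.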
I tried to use a decorator pattern so these layers could be arranged in any order and new ones inserted, but this turned out to be not viable because of cross-method calls and passing references to the current object (which is a couple of layers down from the outside layer). It was very brittle. Also, the inheritance model is much more intuitive.
Anyway, I am not opposed to the idea of making a writable tree if there are other good reasons for it existing.
Good point. I guess it would make sense but we’d have to seal the sitemap node if we go that route (and I guess people will be upset about that).
We can use RequestCacheableSiteMapNode to store the title, and put the responsibility of staying compatible with it on the developer overriding SiteMapNode. Another option would be to have a visitor run right before the view gets rendered which updates the title for that request.
On a side note, I’ve been doing custom route value overriding as well…
Hmm... do you mean sealing RequestCacheableSiteMapNode? Actually, that might make sense, because inheriting from it might cause seemingly random issues to show up from using a cached copy instead of calling the underlying logic. If that is not what you meant, I am not sure I understand your reasoning.
Inheritance tree for SiteMapNode:
SiteMapNodeSecurityBase -> SiteMapNodePositioningBase -> SiteMapNode -> LockableSiteMapNode -> RequestCacheableSiteMapNode
Inheritance tree for SiteMap:
SiteMap -> LockableSiteMap -> RequestCacheableSiteMap
Actually, allowing custom action filter attribute classes to be created that write properties on RequestCacheableSiteMapNode might be a good idea. Potentially writable properties for SiteMapNode:
ParentNode--
Title++
Description++
TargetFrame++
ImageUrl++
LastModifiedDate
ChangeFrequency
UpdatePriority
VisibilityProvider++
DynamicNodeProvider--
Clickable++
UrlResolver++
Url--
CacheResolvedUrl--
CanonicalUrl++
CanonicalKey++
Route
Area
Controller
Action
I have marked all of the writable properties I think might be a good idea with ++ and the ones I think would be a bad idea with --. I am not sure about the rest, and value your opinion. Keep in mind that all of the values written would simply be written to the request cache and would not affect the underlying values.
The RequestCacheableSiteMapNode was originally meant to make certain expensive read-only values stay in the request cache so multiple calls during the same request would not have to use the expensive logic again, but making writable values from the presentation layer doesn't really invalidate its purpose, which is to store values in the request cache.
I think these may make sense:
ParentNode--
Title++
Description++
TargetFrame++
ImageUrl++
LastModifiedDate
ChangeFrequency
UpdatePriority
VisibilityProvider++
DynamicNodeProvider--
Clickable++
UrlResolver++
Url--
CacheResolvedUrl--
CanonicalUrl++
CanonicalKey++
Route++
Area
Controller
Action
Added Route in there, since it sometimes makes sense. Take the following sitemap:
o Category
Imagine I do not want to build a dynamic node provider but still want to be able to set the product category id from the product page. I'd want to do that in the Route.
Or shall we guide people to use a dynamic node provider anyway?
I think building the node provider should be the preferred choice. All the same, the more consistently we make things request-writable, the better.
Any non-writable property will get an exception (thrown by LockableSiteMapNode) indicating that it cannot be written.
What about the other properties?
The list you gave makes perfect sense. We can always open up more later if it proves needed.
Ok, that was straightforward, but I discovered yet another culprit that writes to the node - SiteMapPreserveRouteDataAttribute. Since it writes to the RouteValues collection (exposed through a read-only property, which is why it wasn't on my list before) it is not as straightforward to fix.
Basically, it looks like I would need to implement a whole new request-cacheable collection that knows how to switch to an alternate request-cached source from the one provided by its base class (I think, anyway). RouteValueCollection is based on an ObservableDictionary, but that doesn't raise its events until after the dictionary has already changed - a bit too late in this case.
Based on the note at the top indicating that it cannot be used in conjunction with dynamic node providers, I have to ask - is it worth salvaging?
If we only support dynamic node providers we don’t need it. But maybe we should have it in the design in case requests for it pop up.
I was looking at the LockableDictionary class and thought of a way to make a new dictionary class copy the dictionary values into a new internal dictionary, store that in the request cache, and then retrieve it on each subsequent request, similarly to how the other properties do it. I haven't yet worked out how to generate a key that can be used to get the value back for the specific node, but I think I can do that.
I did run into another snag though. The dictionary is declared as <string, object>. There would be a problem if the dictionary contained a reference type, because the request cache would simply point to the same object that is in the real cache.
Oddly, looking at the comparison logic, it is converting everything to string anyway. Is there any reason why it couldn't be declared as <string, string>? Keep in mind, it is read-only to preserve encapsulation, so it can't simply be assigned a <string, object> dictionary from somewhere else in the framework - it would have to be iterated, in which case it could also be converted.
Converting to string can be done for sure.
In the end, converting from object to string wasn't necessary. There is a CopyTo() method that I reused that already throws an exception if a reference type is detected.
However, creating a dictionary that would work was no easy task (I'll send you the bill :)). I ended up making it a base class named RequestCacheableDictionary that can be inherited by the specialized dictionaries - I know, it is totally inconsistent with the rest of the framework. Since read and write operations each had unique behavior, I ended up creating two different properties - ReadOperationDictionary and WriteOperationDictionary - which return the correct reference to either the cache or the base class. I also added an inheritable boolean property that can switch off request caching entirely, so turning it off or changing the way it is done from an inherited class can be done without too much trouble.
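In copy-on-write terms, the mechanism described above might look something like this sketch. The member names ReadOperationDictionary and WriteOperationDictionary match the description; everything else (the request-cache key scheme, the AddShared helper, the flattened class) is a simplified stand-in invented for the example, not the real implementation.

```csharp
using System;
using System.Collections.Generic;

// Copy-on-write sketch of the RequestCacheableDictionary idea.
public class RequestCacheableDictionary
{
    private readonly IDictionary<string, string> baseDictionary =
        new Dictionary<string, string>();
    private readonly IDictionary<string, object> requestCache; // stand-in for HttpContext.Items
    private readonly string cacheKey;                          // must be unique per node instance

    // Inheritors can switch the request-caching behavior off entirely.
    protected bool RequestCachingEnabled = true;

    public RequestCacheableDictionary(IDictionary<string, object> requestCache, string cacheKey)
    {
        this.requestCache = requestCache;
        this.cacheKey = cacheKey;
    }

    // Reads prefer the per-request copy when one exists.
    protected IDictionary<string, string> ReadOperationDictionary
    {
        get
        {
            object cached;
            if (RequestCachingEnabled && requestCache.TryGetValue(cacheKey, out cached))
                return (IDictionary<string, string>)cached;
            return baseDictionary;
        }
    }

    // The first write copies the shared values into the request cache,
    // so the shared dictionary is never mutated after initialization.
    protected IDictionary<string, string> WriteOperationDictionary
    {
        get
        {
            if (!RequestCachingEnabled)
                return baseDictionary;
            object cached;
            if (!requestCache.TryGetValue(cacheKey, out cached))
            {
                cached = new Dictionary<string, string>(baseDictionary);
                requestCache[cacheKey] = cached;
            }
            return (IDictionary<string, string>)cached;
        }
    }

    public string this[string key]
    {
        get { return ReadOperationDictionary[key]; }
        set { WriteOperationDictionary[key] = value; }
    }

    // Stands in for writes made while the tree is being built.
    public void AddShared(string key, string value) { baseDictionary[key] = value; }

    // Exposed only so the example can show the shared value is untouched.
    public string SharedValue(string key) { return baseDictionary[key]; }
}
```

The per-request copy is discarded with the request cache at the end of the request, so the shared values reappear untouched on the next request.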
Making any of these other lists or dictionaries request writable wouldn't be too much trouble now, so if there is a use for it, speak up!
ISiteMapNodeCollection ChildNodes { get; } - Bad idea
IAttributeCollection Attributes { get; }
IRoleCollection Roles { get; }
IMetaRobotsValueCollection MetaRobotsValues { get; }
IRouteValueCollection RouteValues { get; } - Already done
IPreservedRouteParameterCollection PreservedRouteParameters { get; }
Guessing attributes would make sense, so from the controller we can add something like a "promotion=true" attribute ad hoc, which puts a product in an e-commerce app into "promotion mode" when it is rendered in the view. Not seeing much use for the others.
Well, it might make sense to create an action filter attribute that changes the meta robots values in case you wanted to write them from the presentation layer.
I'll add those 2 in case they are needed.
As it's been a while since ASP.NET MVC 4 came out, and seeing that ASP.NET Web API dropped its dependency on System.Web, I'm looking at evolving MvcSiteMapProvider in that direction too...
vNext would:
That's the technical side. What about features? I would love to hear what you are using today, what you would not want to miss, what's currently missing, anything!