Megabytemb / google-gdata

Automatically exported from code.google.com/p/google-gdata

Object Model makes no sense #543

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
I'm using the Google Content API for Shopping .NET library.  Aside from the 
 verbose name and the horrible documentation, I'm shocked by the poor 
 architecture, especially coming from Google.  The folks at Google 
 should learn something from the .NET Framework: how to elegantly name, 
 architect, and document an object model.  Doing this well yields better 
 adoption and requires less documentation, since the object model becomes 
 more self-explanatory. 

The whole point of this API should be to hide the ugly, gritty details 
 of constructing and submitting the XML document.  The developer 
 shouldn't be concerned with the structure of the underlying XML. 
 Instead, the object model should be designed to simplify the 
 developer's life and should fit within the conventions of the native 
 development context.  Oddly enough, the API's object model achieves 
 neither goal. 

Take, for instance, the "NumberToRetrieve" property name.  It doesn't 
 correspond to the underlying "max-results" parameter, nor is it 
 properly named for what it does.  Ideally, it should be named 
 "MaxResults" and should throw an exception if it is set higher than 
 250, which is the maximum Google's servers will return. 
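
As a rough sketch of what I mean (the class name is made up for the example; the 250 cap is the limit I mentioned above):

using System;

public class ProductListQuery
{
    private int maxResults;

    // Maps to the max-results parameter and fails fast on the client
    // if the documented 250-item cap is exceeded.
    public int MaxResults
    {
        get { return maxResults; }
        set
        {
            if (value < 1 || value > 250)
            {
                throw new ArgumentOutOfRangeException(
                    "value", "MaxResults must be between 1 and 250.");
            }
            maxResults = value;
        }
    }
}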

It is also unclear where in the object hierarchy you're supposed to 
 set attributes.  Let's look at how to perform a batch.  You create an 
 AtomFeed object and assign it a URI; you then add your ProductEntry 
 objects and assign each one a GDataBatchEntryData object specifying 
 its operation.  You also assign a GDataBatchFeedData object to the 
 parent AtomFeed object.  Finally, you execute the Batch method of the 
 Service object with the AtomFeed object and the URI as parameters. 
 Why all the duplication of setting batch data and URIs?  Why 
 not do the following: 

Service.Batch(productEntriesToInsert, OperationType.Insert); 

(where productEntriesToInsert is an IEnumerable<ProductEntry> and 
 OperationType.Insert is an enumeration value)
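
A rough sketch of how such an overload could wrap the steps above as an extension method, reusing the class names from this thread (I haven't double-checked the exact constructors and signatures, so treat this as an illustration, not a drop-in patch):

using System;
using System.Collections.Generic;
using Google.GData.Client;
using Google.GData.ContentForShopping;

public static class BatchExtensions
{
    // Hides the feed construction behind the simpler signature
    // proposed above.
    public static AtomFeed Batch(this Service service,
                                 IEnumerable<ProductEntry> entries,
                                 GDataBatchOperationType operation,
                                 Uri batchUri)
    {
        AtomFeed feed = new AtomFeed(batchUri, service);
        feed.BatchData = new GDataBatchFeedData();

        foreach (ProductEntry entry in entries)
        {
            // Per-entry batch metadata is set in one place instead of
            // by every caller.
            entry.BatchData = new GDataBatchEntryData(operation);
            feed.Entries.Add(entry);
        }

        return service.Batch(feed, batchUri);
    }
}

A caller would then just write service.Batch(productEntriesToInsert, GDataBatchOperationType.insert, batchUri).  The URI parameter only survives here because the underlying Batch call needs one; a service that knows its own account could compute it internally.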

The object model should hide unimportant details like the URI in 
 the aforementioned example.  Here's another example: 

entry.Title.Text = "Product Name" 

instead of: 

entry.Title = "Product Name" 

Why the indirection?  Is the API just following the XML structure?  If I 
 wanted to access the nitty-gritty details of the underlying title element, I 
 could construct the XML by hand. 
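
If the Text indirection has to stay internally, a thin wrapper could still present the simpler form.  A minimal sketch, assuming the entry's Title is already initialized as the usage above implies (the wrapper type is hypothetical):

using Google.GData.ContentForShopping;

public class ProductEntryView
{
    private readonly ProductEntry entry;

    public ProductEntryView(ProductEntry entry)
    {
        this.entry = entry;
    }

    // Exposes the title as a plain string and hides the
    // text-construct indirection of the underlying object model.
    public string Title
    {
        get { return entry.Title.Text; }
        set { entry.Title.Text = value; }
    }
}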

There are many more examples, but I won't bore you with all the 
 details. Am I correct in my interpretation here, or do I not get it?

Original issue reported on code.google.com by joebeaze...@gmail.com on 4 Oct 2011 at 8:50

GoogleCodeExporter commented 9 years ago
Hi Joe,

The client library for the Content for Shopping API follows the design choices 
that were made in the past for the generic Google Data API client library.
Obviously, being very generic is sometimes at odds with being very usable, and 
I understand some of your concerns about the architecture, but at the same time 
I find it hard to drop the existing codebase and ignore consistency 
altogether when supporting a new GData-based API.

Let's consider the NumberToRetrieve property. You say (correctly) that the 
library should abstract the user from the underlying protocol, so why should 
the property be called MaxResults and match the corresponding query parameter? 
What if other APIs use the same query parameter (e.g. max-results) but call it 
differently in their protocol documentation (e.g. MaxResults vs. 
NumberToRetrieve)? Would you have different properties mapped to the same 
implementation, or instead have a property in the base class that both can 
inherit and use?
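
To illustrate the second option, it could look roughly like this, with a single implementation in a base class and an API-specific alias on top (both class names are made up for the example, not our actual query types):

public abstract class BaseQuery
{
    // Single implementation backing the shared max-results query parameter.
    public int MaxResults { get; set; }
}

public class ShoppingQuery : BaseQuery
{
    // API-specific alias for protocols whose documentation uses a
    // different name for the same parameter.
    public int NumberToRetrieve
    {
        get { return MaxResults; }
        set { MaxResults = value; }
    }
}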

I also want to remind you that this is an open-source project and several 
non-Googlers have already contributed to it with patches or new libraries. If 
you want to help us improve this library, your contribution will be appreciated.
Thanks

Claudio

Original comment by ccherub...@google.com on 4 Oct 2011 at 9:54

GoogleCodeExporter commented 9 years ago
Hi Claudio,

Thank you for your response. I would like to propose an architecture that would 
give you the best of both worlds: you can generalize functionality across 
APIs, yet keep some things specific to the particular API. There's already a 
reasonable attempt at doing this; however, it could be done better, while 
making the documentation and coding process far easier. There's a delicate 
balance between generality and specificity. As a rule, be general about the 
functionality, but be specific about the data.

In terms of being specific about the data, the current object model does a 
reasonable job, but it commingles functionality and data in the same object. If 
you consider the AtomFeed as a transport for the data rather than the data itself, 
the ideal architecture becomes clearer. There's already a MailEntry, a 
ProductEntry, a BloggerEntry, etc. The data is different, but they are all 
serialized into AtomEntry XML items. The object model should not only shield 
the developer from the specifics of the AtomFeed and its associated protocol, 
but should also hide the transport functionality. Currently, the Entry items 
have a GDataBatchEntryData associated with them. While that does make it easier 
to serialize into an AtomFeed, it comes at the cost of inflexibility and poor 
usability.

In terms of generalizing the functionality, I would generalize the Service 
object, but specify the connection data, like so:

GoogleContentForShoppingConnection connection =
    new GoogleContentForShoppingConnection("username", "password", "account id");

GoogleDataService service = new GoogleDataService(connection);

// serializes into an AtomFeed and submits it
service.Batch(myInsertBatch, OperationType.Insert);

I would remove the Feed object, since it provides no functionality that isn't 
provided by the native development context:

List<ProductEntry> myInsertBatch = new List<ProductEntry>();

// add items

foreach (Product product in products)
{
    myInsertBatch.Add(ConvertToProductEntry(product));
}

List<ProductEntry> batchCopy = new List<ProductEntry>(myInsertBatch);

These are all straightforward architectural changes that require minimal 
refactoring while increasing programming fluidity. Before I discovered this 
API, I considered getting a copy of the official Atom feed XSD and running 
xsd2code on it (see http://xsd2code.codeplex.com/). That would provide the 
basic XML plumbing for me: I would simply construct the object, fill it 
in, serialize it, and then submit it.
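
Roughly, that route reduces to plain XmlSerializer plumbing over the generated classes. A minimal sketch, where GeneratedAtomFeed stands in for whatever class the generator would actually emit:

using System.IO;
using System.Xml.Serialization;

// Placeholder for a class that xsd2code would generate from the Atom schema.
[XmlRoot("feed", Namespace = "http://www.w3.org/2005/Atom")]
public class GeneratedAtomFeed
{
    [XmlElement("title")]
    public string Title { get; set; }
    // ... remaining Atom members as generated from the XSD
}

public static class FeedSerializer
{
    public static string ToXml(GeneratedAtomFeed feed)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(GeneratedAtomFeed));
        using (StringWriter writer = new StringWriter())
        {
            // Produces the feed XML, ready to be submitted over HTTP.
            serializer.Serialize(writer, feed);
            return writer.ToString();
        }
    }
}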
I wouldn't mind volunteering my time to this project; how would I go about 
doing so? Finally, are you willing to make a major architectural change?

Original comment by joebeaze...@gmail.com on 5 Oct 2011 at 1:45

GoogleCodeExporter commented 9 years ago
Joe, how does your Connection model handle different authentication mechanisms, 
for instance OAuth 1.0 (supported) and OAuth 2.0 (not implemented yet)?
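For instance, would the Connection delegate credentials to something like the interface below, with one implementation per mechanism? (All names here are just for illustration, not existing types.)

using System;
using System.Net;

public interface IAuthenticator
{
    // Adds whatever the mechanism requires: a ClientLogin token header,
    // an OAuth 1.0 signature, an OAuth 2.0 bearer token, etc.
    void ApplyTo(HttpWebRequest request);
}

public class ContentConnection
{
    private readonly IAuthenticator authenticator;

    public ContentConnection(string accountId, IAuthenticator authenticator)
    {
        AccountId = accountId;
        this.authenticator = authenticator;
    }

    public string AccountId { get; private set; }

    public HttpWebRequest CreateRequest(Uri uri)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        authenticator.ApplyTo(request);
        return request;
    }
}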

I like the idea of making batching more straightforward; I don't think we need 
to generalize the Service object to do that, though. We can simply change the 
way batching works and leave the Service implementations as they are.

To be clear, my general concern is that structural changes might affect the 
other libraries.

Also, Google is moving to a new API infrastructure 
(http://googlecode.blogspot.com/2011/05/google-apis-discovery-service-one-api.html) 
and new APIs are going to be based on it. For those APIs there is a new .NET 
client library (http://code.google.com/p/google-api-dotnet-client/), and that is 
where we plan to focus our efforts. What I'm trying to say is that at this point 
it may not be worth going through a major architectural change for this library; 
you might instead want to look at the architecture of the other library and try to 
influence its design while it is still at an early stage.
Of course, if you want to devote some of your time to this project, that will 
be appreciated; I'm just trying to set the correct expectations.

Original comment by ccherub...@google.com on 5 Oct 2011 at 4:34

GoogleCodeExporter commented 9 years ago
Claudio, 

Thanks for the suggestions and your valuable time.  As I was writing this 
response, an idea came up for an improved object model.  My former proposal may 
be more usable, but it is still not .NET-centric. I was suddenly struck by the 
idea of writing custom data providers using ADO.NET's existing plumbing.  I 
searched the web to see whether anyone had done this, and to my surprise someone 
already has.  I don't know if it is worth pursuing, but I think they have 
the right idea.

Original comment by joebeaze...@gmail.com on 5 Oct 2011 at 6:07

GoogleCodeExporter commented 9 years ago
Marking this as obsolete; feel free to open a new entry to propose your patch.

Original comment by ccherub...@google.com on 27 Dec 2011 at 4:30