"Better usage guide" implies that there is a usage guide somewhere. I can't
find one. Did I miss a link somewhere? Also, I installed the VS10 msi but can't
see that it added anything to the VS environment.
I'm working on a project that will allow us to rapidly deploy a hardware
interface for HW developers to test embedded systems. These systems can be
hosted on anything from a full-blown Windows 7 machine running .NET to Linux (C++ or
Mono) running on the equivalent of a washing-machine CPU. The developers will
be working in .NET. I need to provide them remote access to the host. Protobuf
seems like the perfect solution. If the host is resource limited, it could be
running pb in native C++ and the clients would just need to choose the
protobuf-net serialization method.
I have a proof of concept started using .NET remoting channels with custom data
contracts encapsulating the HW registers. Now I want to get it working with
protobuf-net. I assume I just need to provide new client/server
BinaryFormatter-style formatter classes, but I'm at a loss as to how to get started given the
sparse documentation. My deadline for delivering the proof of concept is
rapidly approaching. Any pointers to documentation/examples would be greatly
appreciated!
Original comment by jpat34...@gmail.com
on 1 Jun 2011 at 12:34
The installer should add support for .proto files. Whether you need this or not
depends entirely on the context. Re BinaryFormatter / remoting - well, that
will *work*, but probably wouldn't be my first choice. However, if you *want*
to use protobuf-net within remoting, you simply implement ISerializable and
invoke Serializer.Serialize / Serializer.Merge (in the ctor).
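For concreteness, a minimal sketch of that pattern (the RegisterWrite type and its members are placeholder names, and it assumes the SerializationInfo overloads that protobuf-net provides for ISerializable support):

```csharp
using System;
using System.Runtime.Serialization;
using ProtoBuf;

// Placeholder contract type; substitute whatever you actually remote.
[Serializable, ProtoContract]
public class RegisterWrite : ISerializable
{
    [ProtoMember(1)] public uint Address { get; set; }
    [ProtoMember(2)] public int Value { get; set; }

    public RegisterWrite() { }

    // Deserialization path used by remoting: rebuild the object from the protobuf payload.
    protected RegisterWrite(SerializationInfo info, StreamingContext context)
    {
        Serializer.Merge(info, this);
    }

    // Serialization path used by remoting: write the protobuf payload into the SerializationInfo.
    void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
    {
        Serializer.Serialize(info, this);
    }
}
```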
If you can give more context as to what is running at each end, I might be able
to help more; protobuf-net is but a single implementation of protobuf, and is
aimed at .NET developers. If you are using .NET (I'm not entirely sure, due to
the "native C++" mentions) it should be pretty easy to throw together. Note
that it is a *serialization* API, not a *remoting* API (although interoperable
RPC stacks are mentioned regularly)
Original comment by marc.gravell
on 1 Jun 2011 at 1:04
Hi Marc - thanks for the quick reply. The idea is that the HW developers will
be using the framework our group is developing for algorithm development and
debugging. They will be using the provided API (that provides a high level
abstraction of the host's HW registers, PCI devices etc.) These are the
clients. They basically translate high level API calls to a series of
address/data pairs remoted to the host which writes them to HW. As the HW
engineers develop the algorithms necessary to get the HW to do its thing, they
pass them off to the software developers who are working in parallel to build
the host application. In other words, the client code will never ship so
performance is not a huge concern.
The server has to run on the embedded HW, which might be a complete .NET 4
platform, the Compact Framework (CF), or even Linux (hence the C++ reference above). Thus I need
the clients to be agnostic. If the HW is being deployed on a .NET platform the
client will use Remoting or WCF. If the host is resource starved the server
would run protobuf-net/CF or perhaps protobuf in native C++. The client API has
to adapt to the project's choice and place as few restrictions as possible on
the host software developers.
The problem now is that each project rolls its own development environment. My
group is trying to provide the infrastructure to make this process quick and
uniform. Bus transfer speed is not an issue here. The data to serialize will be
mostly address/int pairs that configure the HW. What is important is cross
platform interoperability and rapid deployment of the API. The idea is to
provide data contracts (in whatever flavor the project needs, e.g. a WCF contract,
a .proto file, etc.) which model the various types of HW devices we need to access,
and then configure the client back end to use the selected serialization protocol.
The easiest way to accomplish this, I think, is to plug the appropriate
serialization scheme into the .NET channel model. Better ideas would be
appreciated!
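To make that concrete, a rough sketch of the kind of contract I mean (type and member names are made up; it assumes protobuf-net attributes can sit alongside the WCF attributes on the same type):

```csharp
using System.Runtime.Serialization;
using ProtoBuf;

// Hypothetical contract describing a block of register writes. The same type
// carries both WCF and protobuf-net metadata, so the client back end can pick
// either serializer without the contract changing.
[DataContract, ProtoContract]
public class RegisterBlockWrite
{
    [DataMember(Order = 1), ProtoMember(1)]
    public uint BaseAddress { get; set; }

    [DataMember(Order = 2), ProtoMember(2)]
    public int[] Values { get; set; }
}
```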
Thanks again,
Jeff
Original comment by jpat34...@gmail.com
on 1 Jun 2011 at 2:36
I would stay away from remoting, personally. Raw sockets or a basic http
payload (just POST the serialized binary) would be my suggestions. Easy to port
between random platforms.
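As a rough illustration of the HTTP route (the URL and the RegisterWrite/RegisterAck types are placeholders, not anything protobuf-net itself defines):

```csharp
using System.IO;
using System.Net;
using ProtoBuf;

[ProtoContract]
public class RegisterAck
{
    [ProtoMember(1)] public bool Success { get; set; }
}

public static class RegisterClient
{
    public static RegisterAck Send(RegisterWrite request)
    {
        byte[] payload;
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, request);   // protobuf-encode the request
            payload = ms.ToArray();
        }
        using (var web = new WebClient())
        {
            // POST the raw serialized bytes; the reply body is another protobuf message.
            byte[] reply = web.UploadData("http://device-host/registers", payload);
            using (var rs = new MemoryStream(reply))
            {
                return Serializer.Deserialize<RegisterAck>(rs);
            }
        }
    }
}
```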
Original comment by marc.gravell
on 1 Jun 2011 at 3:07
I'd request that the wiki be fleshed out with the following info (or if this
info is in source code, the wiki should say so)...
- A table of contents (which the "Getting Started" page should link to or
include at the end)
- Usage "without attributes". The "Getting Started" page still implies this is
not possible.
- How are shared references or circular references handled?
- How are references to "object" handled?
- How are collections handled, and can I change collection types between
versions? If I write "SortedList<A,B> Foo;" in version one and "Dictionary<A,B>
Foo;" in version two, they should ideally be compatible. Assuming collections
are detected automatically, what if I have a class that happens to implement
IEnumerable but I want ordinary field-and-property-based serialization?
- What happens when I change data types between versions? (hmm, I remember
reading somewhere in the protocol buffer documentation about how protocol
buffers handle various kinds of data type changes... wasn't able to locate that
info in a 2-minute search)
- [ProtoInclude(7, typeof(SomeDerivedType))] is an attribute on the base class.
Why can't this be an attribute on the derived class instead? (see the sketch after this list)
- When and how should one use RuntimeTypeModel?
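For reference, my current understanding of how inheritance is declared today, both with attributes and via RuntimeTypeModel (the base type and member names below are made up, and I'm assuming the MetaType.Add/AddSubType API):

```csharp
using ProtoBuf;
using ProtoBuf.Meta;

// Attribute-based: the inheritance relationship is declared on the *base* type.
[ProtoContract]
[ProtoInclude(7, typeof(SomeDerivedType))]
public class SomeBaseType
{
    [ProtoMember(1)] public int Id { get; set; }
}

[ProtoContract]
public class SomeDerivedType : SomeBaseType
{
    [ProtoMember(1)] public string Extra { get; set; }
}

public static class ModelConfig
{
    // The same shape configured at runtime instead (useful when the types
    // carry no attributes at all), applied once at startup.
    public static void Configure()
    {
        var model = RuntimeTypeModel.Default;
        model.Add(typeof(SomeBaseType), false)
             .Add(1, "Id")
             .AddSubType(7, typeof(SomeDerivedType));
        model.Add(typeof(SomeDerivedType), false)
             .Add(1, "Extra");
    }
}
```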
Original comment by qwertie...@gmail.com
on 17 Aug 2013 at 6:01
Original issue reported on code.google.com by
marc.gravell
on 11 Feb 2009 at 8:04