Closed: ivelin closed this issue 7 years ago.
The goal is to design and implement a reusable library that can collect and send usage stats on SIP or SS7 calls, on (SIP, SMPP, or SS7) messages, and on charging/billing events (Diameter) going through a JVM, to a reliable hosted service (which you would design as well). This library will eventually be part of all the main Restcomm projects.
Awesome! I will start today!
I will study this item and come back with any questions.
Do you use a specific tool for the design?
Regards.
Ricardo Limonta
So far we try to use PlantUML for design, and Google Docs or AsciiDoc in general.
Jean,
Are the events from all stacks (SIP, SS7, SMPP, etc.) currently created via Log4j?
If so, I think the collector could be a custom Log4j appender, which would receive these events and send them to the Stats Platform.
If we have different sources, I'll use a generic strategy for the collector.
@rlimonta that's correct. That may be a good idea, but there are 3 points to consider:
Ok @deruelle, these points are very important!
We can create a specific log level (like Level.STATS) and a specific appender (JSON format).
That way, each stack can decide which messages go to the central environment.
@rlimonta do you have pseudocode of how it would look and what the various projects would have to add in their code?
@deruelle here are two examples:
```java
// String message
logger.log(STATS, "string message");
```

Result:

```json
{
  "logger": "org.mobicents.servlet.restcomm.sms.SmsService",
  "timestamp": "1376681196470",
  "level": "STATS",
  "thread": "main",
  "message": "string message"
}
```

```java
// Object message (e.g. an instance of the SmsMessage object).
// In this scenario it's possible to customize the toString() method.
logger.log(STATS, smsMessage);
```

Result:

```json
{
  "logger": "org.mobicents.servlet.restcomm.sms.SmsService",
  "timestamp": "1376681196470",
  "level": "STATS",
  "thread": "main",
  "message": "[sms.toString()]"
}
```
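For context, here is a minimal sketch of how the appender side could produce that JSON layout, written without the Log4j dependency so it stands alone; the class and method names are hypothetical, and only the field names come from the examples above.

```java
// Hypothetical sketch: formats one stats event into the JSON shape shown above.
// StatsJsonLayout is an illustrative name, not part of any Restcomm module.
public class StatsJsonLayout {

    // Builds one JSON event line from the fields a Log4j event would supply.
    public static String format(String logger, long timestamp, String level,
                                String thread, String message) {
        return "{"
            + "\"logger\":\"" + logger + "\","
            + "\"timestamp\":\"" + timestamp + "\","
            + "\"level\":\"" + level + "\","
            + "\"thread\":\"" + thread + "\","
            + "\"message\":\"" + message + "\""
            + "}";
    }
}
```

A real Log4j appender would extend AppenderSkeleton and escape special characters in the message, which this sketch omits for brevity.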
If this approach does not satisfy, I can think of another strategy for the collector.
@rlimonta This seems like a clean, simple and effective approach to me, although I have one question. Some of our projects reuse others; for example, the JAIN SIP Stack is used by SIP Servlets, and SIP Servlets is used by Restcomm-Connect. If each layer uses this, we would record the same call or message 3 times by the time it reaches the Restcomm-Connect layer. How would you prevent that?
Another comment: if we adopt that approach, we should probably define a number of utility Stats classes in a common library (https://github.com/RestComm/commons) that implement the toString() you're suggesting and represent the main possibilities: audio calls (# of them, # of minutes, ...), video calls, messages (SMS, IP), billing, etc., so that not every project creates a different convention and it becomes a mess on the server side.
@deruelle I believe it is better to think of a different approach. The stats system will host not only system logs, but also billing data, call quality, SMS messages and others. It should be more flexible in its data structure.
I will design the collector as an API that can take any object (like a Java bean), transform it into JSON, and persist it in the central system.
This way we will have a flexible statistics system that can be used by any stack.
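A rough sketch of what "take any object and transform it" could look like, using reflection over getters to build a map ready for JSON serialization. All names here (StatsCollector, SmsEvent) are hypothetical illustrations, not the actual module's API.

```java
import java.lang.reflect.Method;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical collector sketch: accepts any Java bean and reads its
// public getters into a name -> value map, which a JSON library could
// then serialize and persist to the central system.
public class StatsCollector {

    // Turns bean.getFoo() into an entry "foo" -> value.
    public static Map<String, Object> toMap(Object bean) {
        Map<String, Object> fields = new LinkedHashMap<>();
        for (Method m : bean.getClass().getMethods()) {
            String name = m.getName();
            if (name.startsWith("get") && name.length() > 3
                    && !name.equals("getClass") && m.getParameterCount() == 0) {
                String key = Character.toLowerCase(name.charAt(3)) + name.substring(4);
                try {
                    fields.put(key, m.invoke(bean));
                } catch (ReflectiveOperationException e) {
                    // Skip unreadable properties rather than failing the event.
                }
            }
        }
        return fields;
    }

    // Example bean standing in for an SMS stats event.
    public static class SmsEvent {
        public String getFrom() { return "alice"; }
        public String getTo()   { return "bob"; }
    }
}
```

Because the map is built dynamically, any stack can send any bean without the collector pre-defining its fields, which is the flexibility argued for above.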
For the back-end, we should have a local environment to host the data, able to replicate it to the cloud, right?
I'll finish the solution design today.
@rlimonta Thanks. Looking forward to the new design.
For the backend, I would like to have it hosted directly in the cloud. As a first step, can we investigate using Google Analytics as a potential solution to report back to? As a second step, or if Google Analytics is not possible/too restricting, we could then think of using platforms such as OpenShift or AWS Lambda to host the backend.
Feel free to share your thoughts on the backend as well.
@deruelle I was considering this point:
"[10:32am] abhayani: we have more challenges here. Most of the enterprise doesn't like direct internet connection to their servers."
So I primarily considered having a local repository.
But if we consider the cloud as the first alternative, the solution is simpler.
@rlimonta good catch. Let's still proceed with the cloud solution first to iterate faster and not overdesign. We can think of a local proxy for on-site customers later.
@deruelle Ok.
@deruelle I have created an overview of the structure of the Stats module and would like your evaluation.
https://docs.google.com/document/d/1WSuvZ68RXeTSOSW7bXF62vjLZagHu5n1TYLbhDwGLyQ/edit
I believe that Google Cloud Datastore and Elastic are appropriate because they operate on a schema-less model, which is essential for this requirement.
If you agree, I will continue working on the module design.
I'm sorry @rlimonta. I was not able to review the design yet. I have been a bit overloaded on other tasks. I'll try to review it this week. @gvagenas may have a look as well.
No problem Jean :)
@rlimonta realized I don't have the rights to see it. just requested access.
@rlimonta I just commented on your document. It's an interesting first draft, but a bit too high level; I would like you to provide more details, as described in my comments in the document. Also, did you investigate using a solution such as https://aws.amazon.com/lambda/ ?
@rlimonta please give me access to check the document as well.
Thanks, George
@deruelle @gvagenas Guys, I will improve the module design by end of day. I looked at AWS Lambda yesterday, and I believe we can use it. The idea is similar to Google and Elastic, but we can use it as a first option.
@deruelle I made a more detailed analysis of AWS Lambda, and we have two possible scenarios:
1) Implement handler components and process persistence tasks in the cloud. Pros: throughput scales automatically. Cons: we need to pre-define our data types (POJOs). See: http://docs.aws.amazon.com/lambda/latest/dg/java-programming-model-req-resp.html
2) Implement persistence directly to DynamoDB. Pros: flexibility in the data structure; in this case the stats module would transform the data types (POJOs) into JSON locally, persisting directly into DynamoDB. Cons: processing overhead on the client.
I would like to know your opinion about it.
@rlimonta on 2), what do you mean by process overhead?
I tend to favor 2), as if we need to move the data to a different structure it would be easier to migrate from DynamoDB to another DB, let's say Cassandra or whatever. Also, Java serialization in 1) is known not to be terribly efficient.
@deruelle The basic difference between options 1 and 2 is the environment where the persistence processing runs. In option 1, the processing runs in the cloud, with elastic scalability.
In option 2, the threads run on the local server. In this case, we need to implement buffer mechanisms to decrease round-trips to the cloud database.
I think it is a good idea to implement an asynchronous service too. What do you think?
My basic idea is to implement connectors to enable the use of different databases like MongoDB, Cassandra, DynamoDB and others.
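The buffering idea above could be sketched roughly like this: events queue up in memory and are flushed in batches, so each round-trip to the cloud database carries many events. The names are hypothetical; the "sink" stands in for whatever connector (DynamoDB, MongoDB, etc.) is plugged in, and a real version would flush from a scheduled background thread.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Hypothetical sketch of a local event buffer that batches stats events
// before sending them, reducing round-trips to the cloud database.
public class BufferedStatsSender {
    private final BlockingQueue<String> buffer = new LinkedBlockingQueue<>();
    private final Consumer<List<String>> sink; // stands in for a DB connector
    private final int batchSize;

    public BufferedStatsSender(Consumer<List<String>> sink, int batchSize) {
        this.sink = sink;
        this.batchSize = batchSize;
    }

    // Non-blocking: callers just enqueue the JSON event and move on.
    public void record(String jsonEvent) {
        buffer.offer(jsonEvent);
    }

    // Drains up to batchSize events and hands them to the sink in one call.
    // A real implementation would invoke this from a scheduled executor.
    public void flush() {
        List<String> batch = new ArrayList<>();
        buffer.drainTo(batch, batchSize);
        if (!batch.isEmpty()) {
            sink.accept(batch);
        }
    }
}
```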
sounds good @rlimonta. looking forward to the second iteration of the design !
@deruelle Ok, thank you!
@deruelle I improved the document, detailing the module structure.
https://docs.google.com/document/d/1WSuvZ68RXeTSOSW7bXF62vjLZagHu5n1TYLbhDwGLyQ/edit
I saw that other modules use Akka, so I adopted the same strategy.
I'm awaiting your evaluation.
Thanks @rlimonta I commented further on the solution.
I'm thinking that for the client side we can recommend or wrap http://metrics.dropwizard.io/3.1.0 for the most popular data we have, i.e. SMS/messages, calls and billing events, and send metrics regularly to the backend server.
Basically, the goal is to start with a simple library that will gather the number of calls, SMS and billing events and send them periodically to a central server. A website will use this information to display the total number of calls, SMS and billing events that went through all the Restcomm projects that integrated the library, across the globe, similarly to what you can see at the bottom of http://www.spinoco.com/ but for calls, SMS and billing.
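To make the counting idea concrete, here is a plain-Java stand-in for the Dropwizard Metrics counters mentioned above; a real integration would use com.codahale.metrics.MetricRegistry with a ScheduledReporter instead, and the metric names here are only illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of thread-safe named counters for calls, SMS and
// billing events, whose snapshot a reporter would periodically send to
// the central server.
public class SimpleMetrics {
    private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    // Increments the named counter, creating it on first use.
    public void inc(String name) {
        counters.computeIfAbsent(name, k -> new LongAdder()).increment();
    }

    // Returns the current count, or 0 for a metric never incremented.
    public long get(String name) {
        LongAdder c = counters.get(name);
        return c == null ? 0 : c.sum();
    }
}
```

Usage would be one line at each event site, e.g. `metrics.inc("calls")` when a call completes.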
@deruelle I studied the Metrics library and I believe it is a good idea.
What do you think about the central server?
Will it be our own environment, or not?
@rlimonta I don't mind going with something that is not our environment; something like OpenShift may be good as well. The faster the better, so we can start collecting community metrics as soon as possible. We can then iterate on refining the library and solution as we learn more.
@deruelle Ok Jean, I'll work on that idea right now.
@deruelle I implemented a client module using the Metrics library and committed it to my GitHub.
https://github.com/rlimonta/restcomm-stats
We need to define the strategy for the back-end. I like the idea of using OpenShift with MongoDB and RESTful web services.
I would like your opinion about it.
After we define, I'll start this layer.
hi @rlimonta, this seems like a good start. I'm still a bit unclear on how those metrics will get transferred to the server side, so it would be great if you could start moving forward with the backend design. Let's create another GitHub project for that; using OpenShift with MongoDB and a REST API sounds like a good direction to start with and iterate quickly.
@deruelle I'll start the back-end.
I think it's interesting to include additional information in the statistics, like the source IP address, since our back-end hosts statistics from different platforms, right?
We'll talk about it later.
We need to be careful; this can be seen as a privacy issue. We can push the country and/or city though, I think.
@rlimonta just checking if you need more insights or if there are any blockers?
@deruelle No, thanks. I will finish this week. Do you have any preference for the web container for the back end? I'm using Glassfish right now, but I can change.
@rlimonta yes, let's use JBoss instead, as most of our platform is using JBoss and it runs well on OpenShift.
Ok. thanks!
@deruelle I finished the implementation of the client and core modules and committed them to my GitHub.
I used a flexible strategy to include new attributes automatically, through Maps.
Now I'll work on the documentation and a web app to display graphics.
After that, I'll publish it to OpenShift.
@rlimonta I'm travelling right now, so I will not be able to review before the last week of June. Do you want me to create separate GitHub projects so you can push there?
@deruelle That would be good! Thank you.
@rlimonta sorry about the delay, I just came back from business travel. Do you want to contribute the https://github.com/rlimonta/restcomm-stats project as a new folder in https://github.com/RestComm/commons ?
For the server side I created https://github.com/RestComm/statistics-service and added you as collaborator there, you can accept invitation at https://github.com/RestComm/statistics-service/invitations.
Were you able to make progress on the documentation, the web app to display graphics, and OpenShift?
@deruelle I'll finish this weekend and publish to OpenShift.
We need to define a unique id for each environment. Do you have any suggestions?
Thanks @rlimonta.
Can you describe what an environment means here and why a unique id is required?
@deruelle The stats module will receive information from various environments. If we want visibility into a specific environment, I think it would be important to have a unique identifier. Otherwise, we may forget this item.
@rlimonta Can we use the project name and version (e.g. Restcomm-Connect 7.7.0) as a unique identifier and potentially include it in the information sent? This would allow us to break down, per project and per version, which systems are sending more traffic and to see the penetration of particular versions over others, like Android and Apple show the penetration of particular versions of their OS.
@deruelle Yes, it's possible!
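A tiny sketch of how each payload could carry the project name and version as the environment identifier discussed above; the class name and the "environment" key are hypothetical, not a defined convention.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: tags a stats payload with "<project> <version>"
// (e.g. "Restcomm-Connect 7.7.0") so the backend can break traffic down
// per project and per version.
public class EnvironmentTag {

    // Returns a copy of the payload with the environment identifier added.
    public static Map<String, Object> tag(Map<String, Object> payload,
                                          String project, String version) {
        Map<String, Object> tagged = new LinkedHashMap<>(payload);
        tagged.put("environment", project + " " + version);
        return tagged;
    }
}
```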
@rlimonta great let me know when ready then and when you pushed the code to the new repo
Related to #283
Related IRC discussion:
gvag: You mean the service that will collect the stats?
[10:25am] gvag: I think we have to go with Graylog for that, or a similar application
[10:26am] ivelin: yes, the cloud service
[10:27am] ivelin: Graylog is the tool, but I'm also thinking about the live service
[10:27am] gvag: I think Graylog, which is based on ElasticSearch but provides many nice features, is perfect for what we need
[10:27am] gvag: Can you elaborate on what you mean by live service?
[10:28am] ivelin: something like stats.restcomm.com where we show a worldwide map of monthly, weekly calls/messages served by restcomm
[10:28am] ivelin: anonymous global usage data
[10:30am] charles-r: ivelin, wouldn't some people object to us collecting data about their running Restcomm instance?
[10:31am] gvag: I am not sure about this service and what it will take to build it. We can discuss it with Alex or Orestis for them to build something on AngularJS. Because I guess we don't want to have Graylog in public, unless we can customize it completely and provide our own logo etc
[10:31am] ivelin: charles-r, yes it's optional
[10:32am] abhayani: we have more challenges here. Most enterprises don't like direct internet connections to their servers
[10:32am] abhayani: so maybe an alternative could be to periodically print in logs
[10:33am] abhayani: and as and when the net is available, push only these lines from the logs to the central server
[10:33am] abhayani: just thinking about various possibilities
[10:33am] abhayani: as we want to have the same for other legacy products too
[10:35am] charles-r: We could start by showing the number of downloads and from how many countries in the world.
[10:35am] abhayani: gvag, I am thinking if this should be a separate project which can be reused by any product
[10:35am] gvag: abhayani, the monitoring services provided by Restcomm are a feature for enterprise monitoring, so customers can run their own elasticsearch or graylog or whatever monitoring server, to collect stats from Restcomm. On the side, we will collect anonymous call data for statistics
[10:36am] gvag: abhayani, what do you have in mind for that? The monitoring service checks for Restcomm live calls by receiving call events etc, this is something very specific to Restcomm.
[10:37am] gvag: These stats are exposed to any monitoring tool by the REST API, very Restcomm specific also
[10:37am] gvag: I don't see any other way that it can be used in other projects
[10:38am] ivelin: the problem with downloads is that they're hard to monitor nowadays
[10:38am] ivelin: between torrents and other sources, it's hard to tell
[10:39am] ivelin: but call/message stats are something that I think will be more representative
[10:39am] ivelin: despite all the limitations you bring up.
[10:39am] ivelin: yes, some machines are behind VPN without any internet access
[10:39am] ivelin: but most are online
[10:39am] ivelin: we aren't looking for a perfectly accurate number
[10:39am] ivelin: just a ballpark representation
[10:40am] ivelin: Do we handle tens or hundreds of millions of messages / calls per day?
[10:41am] ivelin: let me open a new issue for 7.4
[10:41am] abhayani: for legacy we have like 100's of millions of SMS per day
[10:42am] ivelin: yes, so then overall do we handle 100's of millions, or billions?