rjagerman / glint

Glint: High-performance Scala parameter server
MIT License

How to try it out in cluster env like YARN #41

Open timyitong opened 8 years ago

timyitong commented 8 years ago

Hi Rolf, I saw your comment in ticket SPARK-6932. I'm excited and very interested in this project. I seriously believe a parameter server is an essential piece for scaling ML algorithms further into super-high-dimensional spaces. Do you have any examples of how to try this project out on YARN?

Many Thanks!

Yitong

rjagerman commented 8 years ago

Hi Yitong,

Thanks for your interest in the project! I have unfortunately not tried this with YARN, and I personally have never used YARN before. I do think it would be very interesting to get it to work with YARN, so I'll read up on it and see what needs to be done to make it work. However, given the complexity of something like YARN, and my total lack of experience with it, I can't give an accurate time estimate on when this will be completed.

If you find the time yourself to make it work with YARN, I'd be more than happy to look at pull requests as well.

batizty commented 7 years ago

Hi @rjagerman, I'm not sure if you've noticed, but I have been using glint as our main parameter server solution for a while now, and it works well.

I will later publish some modifications to glint, including a YARN patch that lets glint run as a regular YARN application.

I also have a failover problem. When I push a lot of data at the same time (a vector with around 10 billion elements), pulls and pushes time out and the partial server crashes silently. I am working on adding a failover mechanism for the partial servers; do you have any ideas about failover?

These days I am very busy working at an Internet company, but when I find some time I will fork the glint code and carefully add my patch. Could you please review it? Thanks.

rjagerman commented 7 years ago

Hi,

> I will later publish some modifications to glint, including a YARN patch that lets glint run as a regular YARN application.

Thanks, that sounds great, I look forward to it! Just submit the pull request when you are ready and I'll review it.

> I also have a failover problem. When I push a lot of data at the same time (a vector with around 10 billion elements), pulls and pushes time out and the partial server crashes silently. I am working on adding a failover mechanism for the partial servers; do you have any ideas about failover?

Failover is something I'm very interested in, but the underlying problem behind the crashes runs much deeper. Since we use Akka's message-based communication there is no real back pressure, so it's very easy to keep sending messages until the parameter servers eventually fail. To solve this problem, we need to limit the communication somehow, which requires some form of synchronous blocking to stop the client from sending too many requests.

I previously used a very crude heuristic that limits the number of open pull/push requests to a fixed amount via a semaphore (as in GlintLDA). This does stabilize things and prevents crashes, but it also reduces CPU utilization because we are blocking/waiting at times. Tuning the optimal number of open requests remains very problem- and hardware-specific, and is a nightmare.
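For concreteness, here is a minimal sketch of that semaphore heuristic, assuming the pulls/pushes are ordinary Scala `Future`s (as glint's client calls return); the `RequestLimiter` name and the choice of `maxOpenRequests` are made up for illustration and are not part of glint:

```scala
import java.util.concurrent.Semaphore
import scala.concurrent.{ExecutionContext, Future}

// Crude back pressure: cap the number of in-flight pull/push requests so
// the parameter servers are not flooded with messages. The right value of
// maxOpenRequests is problem- and hardware-specific, as noted above.
class RequestLimiter(maxOpenRequests: Int)(implicit ec: ExecutionContext) {
  private val semaphore = new Semaphore(maxOpenRequests)

  def withPermit[T](request: => Future[T]): Future[T] = {
    semaphore.acquire() // blocks the caller when the limit is reached
    val future = request
    future.onComplete(_ => semaphore.release()) // free a slot on completion
    future
  }
}
```

A client would then wrap every request, e.g. `limiter.withPermit(vector.push(keys, values))`, trading some CPU utilization for stability exactly as described above.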

I'm currently looking into the new version of Akka remoting (codenamed Artery). The performance gains should be quite substantial. In particular, this point is interesting for the back-pressure use case:

> Isolation of internal control messages from user messages improving stability and reducing false failure detection in case of heavy traffic by using a dedicated subchannel.

Upgrading Akka to this new remoting does require some fixes to protocol compatibility and serialization that I will have to look into.
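As a rough sketch of what enabling Artery looks like, assuming an Akka version (2.4.11 or later) where Artery ships as an experimental transport; the hostname, port, and system name below are placeholders, and glint's real configuration may need additional settings:

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object ArterySketch {
  // Placeholder settings; Artery is switched on via akka.remote.artery.enabled.
  private val config = ConfigFactory.parseString(
    """
      |akka.actor.provider = "akka.remote.RemoteActorRefProvider"
      |akka.remote.artery {
      |  enabled = on
      |  canonical.hostname = "127.0.0.1"
      |  canonical.port = 25520
      |}
    """.stripMargin)

  def main(args: Array[String]): Unit = {
    // Actors on this system communicate over the Artery transport, which
    // keeps internal control messages on a dedicated subchannel separate
    // from user messages (the isolation property quoted above).
    val system = ActorSystem("glint", config)
  }
}
```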

batizty commented 7 years ago

Hi @rjagerman, I just finished a very simple YARN application that runs glint. Later I can share some performance data about glint on huge vectors, since we already use it as our main parameter server solution, and it works well.

Thanks