First, you have to know how to fork, clone, create a branch, push to your fork, create a pull request, etc.
Second, what are you interested in working on?
If it's something that can be done on the client side then there's a lot you can do without diving into the C++ code. I have a working Python wrapper of the CAPI that I use for easy iteration of client-side testing, so that's an easy place to get started.
If it's something that can only be done on the server side then you'll need to understand the design of the service. That's not easy at this point. In fact, we would appreciate it if someone would make documentation for that. If that's what you want to do then please continue the conversation in #36
@Imgonnawork Really, any issue that has the "Help Wanted" label is a good place to look for a way to chip in. I should also go back over the issues and see if there are some others that need that label, but there is a pretty reasonable list of stuff to do at this point that we could use help with.
If you are interested in Unity development in particular, one thing that I wanted to do was port my psmove-unity5 project over to using this service. It currently uses psmoveapi.
If you wanted to make a fork of PSMove-Unity5 and then work on making that plugin work with PSMoveService that would be a huge help. As cboulay mentioned, doing that will be a lot easier with the new client C API (as compared to the C++ API that's a bit more difficult to wrap into C#).
@cboulay Do you think the CAPI branch is in a good enough state to merge the current code in there back up to master? We can keep the CAPI branch around after the merge because I still want to do the C++ client clean up in there.
EDIT: Also thank you for the offer of help! That's very kind of you.
@HipsterSloth The CAPI isn't perfect. I've run into some edge-case bugs that I know we SHOULD fix, but they weren't a priority. However, when used correctly, it's quite functional and reliable. I can get samples from it at a high rate for hours on end.
That being said, I think we should switch all the clients over to the CAPI before we do the merge. Otherwise it'll be too confusing for people looking for examples. Until then we can do a little hand-holding for anyone that wants to get started on using CAPI.
@cboulay Sounds good. This coming weekend I can start in on converting the C++ clients.
You guys misunderstood me, I'm not good enough to do this kind of thing. I will just study a bit of version control theory, fork psmove-unity5 and PSMoveService, and try to learn some stuff... Also I will keep an eye on issues, try to solve other problems, and reply if I succeed.
Anyway..
@HipsterSloth
> If you are interested in Unity development in particular, one thing that I wanted to do was port my psmove-unity5 project over to using this service. It currently uses psmoveapi.
Is this something we really need? I'm working with Unity and PSMoveService using the SteamVR plugin made for the Vive; it works properly and also has many free examples in the Asset Store (plus a bunch of code for programming interaction and such)... I'm using the plugin because I'm on Unity 5.3.5, but the 5.4 beta has native HTC Vive support and I suppose that also works fine with PSMoveService.
Thank you :)
While it seems the vast majority of interest in PSMoveService is in a VR context, there may be people who want to use the PSMove without VR. Those people will need a native Unity plugin (or Unreal plugin).
I suppose they could still use OpenVR, but requiring the SteamVR service is too much overhead for a simple non-VR game.
@cboulay I hadn't thought about that; yes, that would be useful...
I will geek out a bit on this kind of stuff and see if I can do anything. Good work.
@cboulay Hey, quick question. I know PSMoveService currently doesn't support tracking more than 2 controllers, but do you think it's possible? I have an idea for creating a cheap DIY mocap tracking solution using a modified version of your code. My theory is that since it is tracking the individual colors, you could create a mocap suit using RGB lights that could be tracked by a multitude of PS Eye cameras. Thoughts?
It does support more than 2 controllers I think (I never tested as I only own 2). But the approach used here to get poses is quite different to the approach required by your idea.
A large part of PSMoveService's code-base is in the server-client architecture. This is probably useful to you at some point in the future, but I think it's a little too much overhead initially while you do rapid testing of your mocap solution.
Another part of PSMoveService that might be useful to you is the camera-management code. OpenCV does most of the work here, but we subclassed some OpenCV classes to provide a common interface to all cameras including the non-standard PS Eye (non-standard on Windows and Mac anyway, nothing was needed for Linux).
The other major part of PSMoveService is the controller-specific code. There's quite a bit of PSMove-specific code in PSMoveService, but it won't be of any use to you. The controller orientation comes from its IMU which you won't have (not in cheap DIY solution anyway). The controller position is determined by getting the position of its bulb. This is only possible because the bulb is of known diameter, and because it is a sphere so we know how it should project onto a 2D plane. Also, we only expect one contour of a specific colour, so we identify the largest contour as our sphere and this allows us to discard smaller contours that appear due to noise or imperfect filtering. None of this applies to arbitrary mocap LEDs.
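(For the curious, here's a rough sketch of why a known-diameter sphere gives you depth from a single camera under a simple pinhole model. This is a simplification for illustration, not the actual PSMoveService projection code, and the numbers in the example are made up.)

```python
# Simplified pinhole relation (illustrative only): a sphere of known diameter D
# (metres) that appears d pixels wide, seen by a camera with focal length f
# (pixels), is at roughly depth Z = f * D / d.
def sphere_depth(focal_px: float, bulb_diameter_m: float, apparent_diameter_px: float) -> float:
    return focal_px * bulb_diameter_m / apparent_diameter_px

# e.g. a ~45 mm bulb projecting to 30 px with a 540 px focal length
# would sit roughly 0.8 m from the camera.
print(sphere_depth(540.0, 0.045, 30.0))  # ~0.81
```

Arbitrary mocap LEDs give you no such size cue, which is why you need at least two cameras to recover a 3D position.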
There are some things that we do in PSMoveService that would apply to your situation, but this is mostly just calling OpenCV functions with appropriate parameters. The colour filtering is mostly done in OpenCV. You could get LED positions in 3D space, as long as each LED is captured by at least 2 cameras, using OpenCV's triangulation (e.g. triangulatePoints); solvePnP only helps if the LEDs form a rigid body with known geometry.
So far this is pretty easy to do.
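To make the easy part concrete, here's a minimal sketch assuming already-calibrated cameras with known 3x4 projection matrices. The function names, colour thresholds, and structure are illustrative only, not taken from PSMoveService.

```python
# Illustrative sketch: isolate one coloured LED per camera frame and
# triangulate its 3D position from two calibrated views.
import cv2
import numpy as np

def find_led_center(frame_bgr, hsv_lo, hsv_hi):
    """Return the (x, y) pixel centre of the largest blob in the HSV colour range, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lo, hsv_hi)  # per-frame colour filter
    # [-2] keeps this working on both OpenCV 3 and 4 return signatures
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)  # keep the biggest blob, drop noise
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def triangulate(P1, P2, pt1, pt2):
    """3D point from pixel coords of the same LED seen by two cameras with projection matrices P1, P2."""
    pts4d = cv2.triangulatePoints(P1, P2,
                                  np.asarray(pt1, dtype=float).reshape(2, 1),
                                  np.asarray(pt2, dtype=float).reshape(2, 1))
    return (pts4d[:3] / pts4d[3]).ravel()  # de-homogenise to (x, y, z)
```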
The real difficulties are in creating a skeleton from a set of labeled points, and in handling missing LEDs on a frame-by-frame basis. I think the correct way to do this would require implementing a kinematic model of the body (e.g., your wrist can't be immediately next to your elbow, you can't go from completely extended to completely flexed in a single frame) that is robust to sparse input. This is a common-enough problem in academic research that I wouldn't be surprised if there are already open-source projects for doing things like this, especially from the Wiimote era.
The only difference between your solution and what people were doing with Wiimotes to track kinematics is that you could automatically label the LED-points based on their colour.
One more point, I don't think clear LEDs will work very well for colour-separation. I think you will need to get frosted LEDs or otherwise diffuse the light somehow (scotch-tape?).
@ebelingp I can confirm that PSMoveService can track at least four PSMoves. With your idea of mocap (assuming I understand it), would it not be easier to use a Kinect, as that already generates the whole skeleton?
Awesome, thanks for the reply! In terms of creating a skeleton, I would think the easiest solution would be to have someone stand in the FOV of one of the cameras first, so that the software can measure the approximate distances between the LEDs (kind of like how the tracking mat works, I assume). Then create the skeleton based off that, with restrictions like you said (not being able to move your hand to your elbow, etc.).
The reason I feel this solution is better than the Wiimotes is that by being able to track each LED's color, you're not limited to a certain FOV if you have multiple cameras.
For the clear LED point, I would probably put some sort of ball (like the Move does) over the LED to keep tracking consistent, and somewhat easy to implement based off of the current software.
What would be your opinion on the LED color implementation? Going off of what has already been done, I would assume you would need a different color for each individual LED. I'm thinking, though, that it may be possible to use the same color for all the bulbs. My thought process is that if the Leap software can take an infrared image and convert it to an in-game skeleton with decent accuracy, it shouldn't be that difficult to locate specific points (represented by the LEDs) and convert those to a skeleton.
Thoughts?
@zelmon64 Possibly. The problem with the Kinect is the reliability of the tracking. I'm trying to create a mocap suit that could be used within games to emulate motion on the cheap. From my understanding, the Kinect works similarly to the Leap in that it takes an image and then attempts to create a 3D skeleton using post-processing. The problem that occurs with this is precision. By simplifying it down to tagging a skeleton to LED bulbs attached to different parts of the body, I believe this would increase the precision and stability of the tracking.
@ebelingp The Wiimote is nothing more than an IR camera, buttons, and an IMU, all with Bluetooth communication. People who used the Wiimote for kinematics were just using the IR-camera part of it (and the IMU to get the camera orientation, if they were being super fancy). There is no reason you can't have multiple Wiimotes/IR cameras. In fact, you can even add in a DK2 camera or any other camera that's sensitive in the IR range.
If you're planning on using the same colour for all your LEDs, there is no advantage to using visible light instead of IR. In fact, it's worse. Only use colour LEDs if you're going to take advantage of the different colours. This is done per-frame by applying different colour filters to the retrieved image.
In either case, the first step will be identifying & labeling the LEDs. I haven't looked at kinematic analysis software in a long time, but I think they get the label of each tracker based on the minimum distance to its predicted location. e.g., tracker LF1 is expected to be at position X1, Y1, Z1, and tracker RF1 is expected to be at X2, Y2, Z2. An unknown tracker is found at Xi, Yi, Zi, so new_tracker_id = argmin(norm(new_tracker - LF1), norm(new_tracker - RF1)). This will be made somewhat easier by using coloured LEDs, because there will be fewer tracker-location combinations to check and because distances between LEDs of the same colour should be huge. The result is that this approach will be slightly faster and more noise-tolerant than IR-only (but visible light has a lot more noise than IR light).
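As an illustration of that labelling step only (made-up names and thresholds, not code from any existing package), the per-frame assignment could look something like this:

```python
# Illustrative sketch: assign each unlabelled 3D detection to the nearest
# predicted tracker position, rejecting implausibly large jumps.
import numpy as np

def label_detections(predicted, detections, max_jump=0.15):
    """predicted: {label: xyz predicted from the last frame / kinematic model},
    detections: list of unlabelled xyz points. Returns {label: xyz}."""
    labelled = {}
    for point in detections:
        # With coloured LEDs you would first restrict `predicted` to labels of
        # this detection's colour, shrinking the set of candidates to check.
        dists = {lbl: np.linalg.norm(np.asarray(point) - np.asarray(pos))
                 for lbl, pos in predicted.items() if lbl not in labelled}
        if not dists:
            break
        best = min(dists, key=dists.get)
        if dists[best] <= max_jump:  # gate in metres; tune to your frame rate
            labelled[best] = point
    return labelled
```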
Adding balls seems like a headache, unless you can source a ton of them for super cheap. Even if they're rigid, they'll probably be too small to get useful z-position tracking from a single camera at any reasonable distance, so you're not gaining any consistency w.r.t. the current tracking method. One advantage that might exist with a ball is that it will stick out from the body a little bit, so it will have slightly more resistance to occlusion, but I imagine the amount they'll move around and fall off your LEDs would be more trouble than the potential benefit. But I don't know. Maybe they'll also help keep the LEDs from twisting away, so that might be worth it.
I think you need to find some open-source kinematics analysis software first and then look through that before you decide how to proceed.
@cboulay Thanks, that sounds like the right idea! I'll try to look at the software first and go from there. Now that I think of it, I agree with you that the only advantage to using colored LEDs instead of IR is the higher accuracy from using different colors, compared to having to calculate which IR light corresponds to which position you're trying to map it to.
@cboulay I sent you an email essentially about how can I help the project. Did you receive it?
Also, I'm trying to identify (code-wise) a few bugs that I'm having on my side (like the controller not moving smoothly in Steam, even though it's perfect in the config tool). Even though I'm trying to fix it myself, should I still open a new issue for the bug? I have been doing some tests (in the code) to try to narrow down the problem, but the thing is that I have a few questions about the matter and I would like to ask them in the proper place.
Let's move any further discussion on this topic to the google group.
Hi, my name is Adriano, I'm a student and I would like to thank you for your work. I would like to contribute to this project but honestly I can't at the moment; that's the reason I'm writing here. Could you suggest a way to study this codebase and figure out how things work?
I know a bit of every programming language, and I'm also studying Unity (that's how I found this git project), but I feel very lost trying to understand what is happening here! (The only thing I've done is start reading every issue discussion to understand what is already working, what is in development, and what kinds of problems users are facing.)
Thank you! Adriano D.
p.s. It's been a long time since I've written in English, I hope it's readable :p