IEEERobotics / bot

Robot code for 2014.
BSD 2-Clause "Simplified" License

Localization. #387

Open AhmedSamara opened 9 years ago

AhmedSamara commented 9 years ago

The 2016 competition involves a course that requires precise navigation, but does not provide any easy navigation methods like a line or path.

We need to get creative, so please dump any ideas you have for navigation in this issue.

Current ideas:

- Using OpenCV to position ourselves relative to QR codes, as in #379.
- Using ultrasonics to determine (x, y) coordinates relative to the walls (sketched below).
- Using a mouse to keep track of coordinates relative to the starting point.
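For the ultrasonic idea, here's a minimal sketch of how wall ranges could become course coordinates. It assumes axis-aligned walls, one sensor facing the left wall and one facing the rear wall, and a heading estimate from another sensor; all names here are placeholders rather than our actual sensor config:

```python
import math

def position_from_walls(dist_left, dist_rear, heading_rad):
    """Estimate (x, y) from ultrasonic ranges to the left and rear walls.

    Assumes the walls are axis-aligned, the heading (radians away from square
    to the walls) comes from another sensor such as the IMU, and the rotation
    is small enough that each sensor still faces its own wall. Each slant
    range is projected onto the wall normal before being used.
    """
    x = dist_left * abs(math.cos(heading_rad))
    y = dist_rear * abs(math.cos(heading_rad))
    return x, y
```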

PaladinEng commented 9 years ago

LIDAR SLAM


NicolasKiely commented 9 years ago

Just some thoughts, but initially start with three map models: two predefined annotated schematic versions and one built on the fly from sensor data. Discard one of the schematic versions when it fails to match the observed data. Try to align the remaining schematic with the running model to tag the observed objects (e.g., truck, container, etc.), and then later tag individual containers with the color/type when the QR codes are read.
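A rough sketch of what the "discard the schematic that stops matching" step could look like, assuming all three models are kept as boolean occupancy grids at the same resolution (the function names and the margin value are just illustrative):

```python
import numpy as np

def match_score(schematic, observed):
    """Fraction of observed occupied cells that the schematic also marks occupied.

    Both arguments are boolean occupancy grids of the same shape; cells the
    sensors have not seen yet should simply be False in `observed`.
    """
    seen = observed.sum()
    if seen == 0:
        return 0.0
    return float(np.logical_and(schematic, observed).sum()) / float(seen)

def prune_schematics(schematics, observed, margin=0.15):
    """Drop candidate schematics that clearly fit the sensor map worse.

    `schematics` is a dict of name -> grid. A candidate is kept only if its
    score is within `margin` of the best score, so we do not commit early on
    ambiguous data.
    """
    scores = {name: match_score(grid, observed) for name, grid in schematics.items()}
    best = max(scores.values())
    return {name: grid for name, grid in schematics.items()
            if scores[name] >= best - margin}
```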

SeanKetring commented 9 years ago

@NicolasKiely can you post that info you were telling me about on Tuesday? We were talking about the site with info on the packet format and data from the LIDAR.

NicolasKiely commented 9 years ago

I found some stuff here: https://sites.google.com/site/chenglung/home/xv-11-lds-v2-4-13386-fimware-data. From what I can tell, the device only captures data in the plane of rotation; it doesn't sample height data. It also samples in 1 degree increments.
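Based on those specs (one range sample per degree, planar only), here's a sketch of turning a single revolution into course-frame (x, y) points given a pose estimate for the robot; the mm units and the validity check are assumptions about how we'd read the packets:

```python
import numpy as np

def scan_to_points(ranges_mm, pose_x=0.0, pose_y=0.0, pose_theta=0.0):
    """Convert one 360-sample XV-11 revolution into (x, y) points.

    `ranges_mm` is a length-360 array, one range per degree, all in the
    sensor's plane of rotation (the unit has no height information). Zero or
    negative entries are treated as invalid and dropped. The result is
    expressed in the course frame given the robot pose estimate.
    """
    ranges = np.asarray(ranges_mm, dtype=float)
    angles = np.deg2rad(np.arange(360)) + pose_theta
    valid = ranges > 0
    x = pose_x + ranges[valid] * np.cos(angles[valid])
    y = pose_y + ranges[valid] * np.sin(angles[valid])
    return np.column_stack((x, y))
```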

NicolasKiely commented 9 years ago

From some back-of-the-envelope work, the device's range and resolution should be fine for identifying the competition landmarks. It's not precise enough to detect and isolate individual cargo blocks, though.

NicolasKiely commented 9 years ago

Okay, here's a 2D localization strategy the Kinect team drew up tonight:

kennychuang commented 9 years ago

This proposal is very similar to the post above, with the addition of template matching. This is a draft I mocked up with the process and some pros/cons of this method. Anyone should be able to edit the document to add information to it.

This is of course not the perfect/best solution to the localization problem, so other proposals are much appreciated.

Link (if you missed it above): https://docs.google.com/document/d/1xgNy3eEPN6ruoWOiwoxyDTHthTB5dFY5_BsZzqixUks/edit?usp=sharing

Please version it to keep track of edits.
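For reference, the core template-matching call in OpenCV is small; this sketch assumes the scan has already been rasterized into a single-channel occupancy image and that we have a template of a landmark's outline (both names are made up for illustration). Note that plain matchTemplate is not rotation-invariant, so the scan would need to be de-rotated first or several rotated templates tried:

```python
import cv2

def locate_landmark(scan_image, landmark_template):
    """Find the best match of a landmark template in a rasterized scan.

    Both inputs are single-channel uint8 images: the scan rendered as an
    occupancy image and a small template of the landmark's expected outline.
    Returns the top-left corner of the best match and its score.
    """
    result = cv2.matchTemplate(scan_image, landmark_template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc, max_val
```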

SeanKetring commented 9 years ago

We need to get a test reading of the course using the LIDAR. It can currently be powered from the bot. The point of this is to verify that we get a useful image of the course using our LIDAR, and to understand what it is we will be seeing.

I think we should do this tomorrow evening.

To the LIDAR Team (Thomas/Duncan/Trey)

Can you help with this tomorrow night?

adwiii commented 9 years ago

From the LIDAR team:

We were going to start this as soon as we had the LIDAR properly mounted on the bot. The main issue is that we need to get the bot at the start of the meeting, since setting up the field to see how good the data really is will be a bit of an involved process. We will be glad to work on that tomorrow. Other than that, we have been working on algorithms to convert the LIDAR data into usable (x, y) coordinates for the bot.


SeanKetring commented 9 years ago

Ok, so you're thinking the current mounting isn't workable? I know it's upside down, which isn't ideal, but it does currently have a stable mounting.

Thanks for the fast reply.

adwiii commented 9 years ago

I want to test the data upside down and see how it looks before saying that. I meant more that I didn't want there to be more pressing issues on the robot before we took it to collect data.


SeanKetring commented 9 years ago

OK, you guys can commandeer the bot tomorrow

Sidd-GrizzAp3 commented 9 years ago

Is there a sample data dump somewhere that people can play around with for regression analysis or something? I.e., taking raw data from the LIDAR and trying to form walls/boundaries. I want to work on that.
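Until a dump gets posted, here's the kind of fit that could run on it: a sketch of fitting one wall to a cluster of scan points, assuming the points have already been converted to (x, y) and grouped (the names are illustrative):

```python
import numpy as np

def fit_wall(points):
    """Fit an infinite line (a wall candidate) to an Nx2 array of LIDAR points.

    Uses total least squares via SVD, so vertical walls work as well as
    horizontal ones. Returns a point on the line, its unit direction, and the
    RMS perpendicular error, which can be used to accept or reject the points
    as a single straight wall.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]            # principal direction of the point cloud
    normal = vt[1]               # perpendicular to the fitted line
    residuals = (pts - centroid) @ normal
    rms_error = float(np.sqrt(np.mean(residuals ** 2)))
    return centroid, direction, rms_error
```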

AhmedSamara commented 9 years ago

@tvbarnette999 @adwii @dmpage318 do you guys have anything like what @Sidd-GrizzAp3 is talking about?

adwiii commented 9 years ago

This is what we have been working on at meetings: analyzing the data. We have been using FFTs in MATLAB/numpy for cross-correlation, which works really well for the theoretical data we drew up based on what the map looks like, but has not gone so well with the actual data we took recently with the field out. We can look into posting the data we got on the field, along with the samples based on the map itself, in the next two days. Since this is what we have been working on since we got the LIDAR, it would probably be easiest for us to talk it over in person on Tuesday and explain the data and how we have been using it. One key issue is that even if we get the MATLAB/numpy process to work reliably, it can take a long time to run.
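For anyone following along, this is roughly the shape of the numpy FFT cross-correlation step described above, under the assumption that both scans are binned one sample per degree and only the rotational offset is being searched (the names and the sign convention of the returned offset are illustrative):

```python
import numpy as np

def rotation_offset(measured, simulated):
    """Estimate the rotation (in degrees) between two 360-bin range scans.

    `measured` is the live scan, `simulated` is the scan predicted from the
    map at a candidate position; both are length-360 arrays indexed by degree.
    Circular cross-correlation is done in the frequency domain, so the whole
    search over 360 shifts is a couple of FFTs instead of an explicit loop.
    """
    m = np.asarray(measured, dtype=float)
    s = np.asarray(simulated, dtype=float)
    m = m - m.mean()
    s = s - s.mean()
    corr = np.fft.irfft(np.fft.rfft(m) * np.conj(np.fft.rfft(s)), n=len(m))
    return int(np.argmax(corr))
```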


AhmedSamara commented 8 years ago

I think we can solve a lot of this somewhat elegantly by modelling the entire robot the way you would normally model an 8 DOF robot arm.

A full model of the robot is located here.

We're currently planning on using omni-wheels, as well as a rail to mount the Dagu arm on (6 degrees of freedom, all rotational joints), so I think we can easily model this as one giant robot arm: two prismatic joints and a rotational joint coplanar with the world frame (for the omni-wheels), another prismatic joint at the base of the robot for the rail, and then six more rotational joints for the arm itself.

I can't remember exactly how to calculate this, but I remember that for each frame you need a displacement from the previous frame. So our model would look like this:

| Name | Type | Length | Angle |
| --- | --- | --- | --- |
| World frame | | | |
| Omni-wheel | Prismatic | 0 | ? |
| Omni-wheel | Prismatic | 0 | ? |
| Omni-wheel | Rotational | 0 | ? |
| Arm rail | Prismatic | Height of robot | |
| Arm | Prismatic | Height of rail | |

(fill in the rest of the arm)
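Once the table is filled in, the chain could be composed out of plain 4x4 homogeneous transforms. This is only a sketch under the joint layout described above; the axis choices, argument names, and rail direction are assumptions:

```python
import numpy as np

def prismatic(axis, d):
    """Homogeneous transform for a prismatic joint: translate d along axis."""
    t = np.eye(4)
    t[:3, 3] = np.asarray(axis, dtype=float) * d
    return t

def rotational_z(theta):
    """Homogeneous transform for a rotational joint about the local z axis."""
    c, s = np.cos(theta), np.sin(theta)
    t = np.eye(4)
    t[:2, :2] = [[c, -s], [s, c]]
    return t

def end_effector_pose(x, y, heading, rail, arm_transforms):
    """Compose world -> end-effector for the omni-wheel base, rail, and arm.

    x, y, heading are the base's planar pose (the two prismatic joints and the
    rotational joint in the table); `rail` is the prismatic travel along the
    robot; `arm_transforms` is a list of 4x4 transforms for the six arm
    joints, to be filled in once the rest of the table is worked out.
    """
    pose = prismatic([1, 0, 0], x) @ prismatic([0, 1, 0], y) @ rotational_z(heading)
    pose = pose @ prismatic([1, 0, 0], rail)
    for joint in arm_transforms:
        pose = pose @ joint
    return pose
```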

@WillDuncan1 and @BrettGoldbach

We've been using Alwyn's code and just porting it to Python, but because it was originally written in MATLAB it has been extremely unmaintainable and hasn't been working out that well. I think things would be a lot easier if we either used the Robot toolkit and attempted to compile that, or looked into other Python robotics libraries. I think this would also make tying our senior design project into the rest of the robot a lot easier.