LambdaConglomerate / x9115lam


Final Project #17

Closed: aisobran closed this issue 8 years ago

aisobran commented 9 years ago

We should have a proposal by Tuesday. During class we talked about adding novel heuristics to PSO, and I think it's a great idea. We should also consider which models we want to use; I suggest everything in the moea problems. We could also use it to tune machine learning algorithms. That would be pretty straightforward, since we could use scikit-learn and a data set from Kaggle.

Another alternative would be to code up GALE and apply it to these models.

meneal commented 9 years ago

I'm good with that. My only question, when we actually talk to Menzies, is whether rolling through all of the models in moea would be enough for him. If it is, I think that's a totally straightforward way to get it all knocked out. I definitely like the idea of using PSO. Do you think we even need to come up with novel heuristics, or would that just be for sport?

ghost commented 9 years ago

I agree with using the moea problems with PSO, and I also agree that we should ask Menzies whether we should try implementing some heuristics. We should focus on getting everything to work first and add heuristics only after that. In our proposal, we should mention that we are considering different heuristics for PSO and will discuss them with Menzies.

meneal commented 9 years ago

Did he even want us to write anything up?


ghost commented 9 years ago

I remember him saying something like that last class. I think he just wants a couple of sentences on a piece of paper.

aisobran commented 9 years ago

Yeah, we need some form of project proposal by Tuesday.

meneal commented 9 years ago

I just started a proposal in the final project folder. I wasn't sure what we wanted for the number of heuristics, so I just put in an X as a placeholder. I used Word since I was having some problems with LaTeX; El Capitan seems to have messed me up a bit. How many heuristics should we have up there? Or should we just leave it blank and ask him about scope when we talk to him?

aisobran commented 9 years ago

Actually, could we do Markdown?

meneal commented 9 years ago

Totally fine with me. We can just delete that Word file.


ghost commented 9 years ago

I don't think we need to say exactly how many heuristics we want to implement.

Also, a quick question on the predator heuristic. Why would we have the predator keep the swarm away from the global best? It seems we should keep the swarm away from known bad areas of the space, right? Are we trying to avoid local extrema by doing this?

meneal commented 9 years ago

Yeah local extrema avoidance was my point with that.


meneal commented 9 years ago

I've printed the markdown that was up there for the final project to bring to class tonight. Do we want to change anything on it based on what we talked to Menzies about on Tuesday? I can reprint in the library if we need to make changes. Also, if we're good with this, we might as well send a link to Rahul and Menzies too, based on Rahul's email from the other day.

aisobran commented 9 years ago

Sounds good to me.

ghost commented 9 years ago

I've copied our HW6 code to the project folder, so we can begin working on it. I know both of you are busy with data science right now, so I'll see if I can get some sort of PSO up soon.

aisobran commented 9 years ago

I just wanted to give you guys a heads up that I will code the moea problems using our framework while I'm flying. I should have them up by end of Friday.

ghost commented 9 years ago

Matt, are you planning on coming to class today? Menzies is doing the rounds with all the groups and I would rather have another group member with me.


ghost commented 9 years ago

I just pushed a really basic start of PSO. All the vectors are there, but they don't change right now because I'm not calculating the fitness of N.

If anyone is planning on coming to class today, please let me know. Otherwise, I'll stay home and work on this more.


meneal commented 9 years ago

I'll be in class.


aisobran commented 9 years ago

Added all the models from moea. I ran them all through SA with no errors and double-checked the formulas once. It would be best if someone could review them, because the translation could have mistakes.

meneal commented 9 years ago

Papers I'm reading right now to get a grip on everything for our project:

So, the Off the Shelf article was one of the most useful. It presents PSO in an easy-to-understand form, and it will probably be pretty helpful in terms of setting up all of the different tunings for this thing. The last two are really just examples of what we're already trying to do; I haven't read them yet. I pretty much went through the Taxonomy paper and then pulled out the papers that seemed most like what we are trying to do.

Neither of you needs to read these papers by any means, or at least definitely not all of them. I figured, though, that at least for me it would help to think about the best architecture for this whole setup. Scatter probably isn't going to be that useful; I haven't read it or the last two yet.

meneal commented 9 years ago

One other thing on this: we're going to have to do some sort of write-up for this anyway, so any lit review we do is going to be useful when it comes to the write-up. If either of you finds anything else you think we should read, in Taxonomy or elsewhere, please post it. I'll read that too.

meneal commented 9 years ago

Read those last two papers. They are terrible. I'll just put that out there. Scatter PSO is really interesting though.

meneal commented 9 years ago

Here are a few things I'm thinking we'll need to decide on to work on this thing. I'm going to get started and will implement stuff as it occurs to me or as it's easiest. I'm totally happy to change things around as we go, though. I'll post info on the pushes I make. Here are my thoughts before really getting rolling:

meneal commented 9 years ago

Here's another paper that gives an intuition about initializing velocity: Particle Swarm Optimization: Velocity Initialization

meneal commented 9 years ago

Just pushed some initial stuff on creating the cans. I've pretty much pulled everything else from the pso file. I don't totally understand the ramifications of using continuous domination yet, so I'm going to read that Zitzler paper to get a decent idea of what to do.

I guess the big thing is that we were using our from-hell aggregation as our energy value, so now I guess we use continuous domination to create that energy value; I just don't know exactly how that works after reading what Menzies has on the site.

ghost commented 9 years ago

Maybe one way to move forward would be to implement the basic PSO from Menzies' notes with the parameters from the off-the-shelf paper and then add on as we need to. If we do this, we can compare the results of the basic model against one where we implement predator-prey and local neighborhoods and see if we get better results. How does that sound?

ghost commented 9 years ago

Just pushed some really small fixes to pso.py. First, I changed the velocity so that each element in init.pos has its own velocity.

Going with the idea of initially having just a global neighborhood, I made L = c.pbest, although we might actually want to just set phi_2 = 0 for now.

I also added the basic velocity and position updates from Menzies' notes. I set np = 1 so we can focus on getting the calculations correct for one particle. You can see that right now the update doesn't change anything, because the particle is already at its best position. We can change this by giving it a small initial random velocity in each direction or by changing pbest to the results of a base runner.
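For reference, here's a minimal standalone sketch of that update (the function and names are hypothetical; phi_1 = phi_2 = 2.05 with constriction factor k = 0.729 are the usual off-the-shelf settings):

import random

def update_particle(pos, vel, pbest, lbest, phi_1=2.05, phi_2=2.05, k=0.729):
    # Velocity: inertia plus random pulls toward the personal best
    # (pbest) and the neighborhood best (lbest), scaled down by the
    # constriction factor k.
    new_vel = [k * (v
                    + phi_1 * random.uniform(0, 1) * (pb - x)
                    + phi_2 * random.uniform(0, 1) * (lb - x))
               for x, v, pb, lb in zip(pos, vel, pbest, lbest)]
    # Position: just step along the new velocity.
    new_pos = [x + v for x, v in zip(pos, new_vel)]
    return new_pos, new_vel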

Again, this is just the basic PSO implementation, since we want to make sure we have a solid base before we add the fun stuff.

Let me know if you have any questions or concerns.

meneal commented 9 years ago

I did decide to put the cans in PSO. I just made a class for them, but we can always pull that out if we decide to go another direction. It just didn't make sense to put them in the state once I really thought about it; this way we keep the state more general.

If the new position doesn't match the constraints, I'd think we would need to do something like flip the velocity's sign on one or more variables and then recalculate. That's the first thing I could think of, anyway. Do you have any ideas offhand?

My thought on this was exactly as you say: build this with the simplest mechanism possible and then add in different elements (predator/prey, etc.). The biggest thing right now, I think, is to get continuous domination working right and then build out the rest of the basic framework. I'll read the cdom article today and hopefully have an implementation working today or tomorrow.

ghost commented 9 years ago

I agree with keeping the state general and putting the PSO variables within PSO if possible. I notice that now we're not using state at all in PSO. Is this intentional, or do you plan on including it? I ask because if we want to include it, we'll need a way to bridge s in state.py to the can class in pso.py.

For matching the constraints, I think that would be one way to go about it. Without knowing anything about the constraints, I think doing something stochastic would be a good way to go for now. Maybe we could have a life or patience counter when doing this, because we don't want to keep messing with the candidate. If the candidate runs out of lives, maybe that means it's in a bad decision space? Does it make sense to keep the candidate in that general area anymore? Maybe there's something we can do to move the candidate into an area with better performing candidates where we don't have to retry as often. I'm not sure about it, just throwing some suggestions out there.

I agree with implementing cdom next. Menzies already has some pseudocode up there, but it doesn't look like it'll be that hard to implement. We'll probably just have to adjust loss to get the objective values. Let me know if you run into trouble and I'll have a closer look.
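For concreteness, a minimal sketch of what that pseudocode boils down to (my assumptions: the objective vectors are already normalized to [0, 1], and weights marks each objective as minimized (-1) or maximized (+1)):

import math

def cdom(x, y, weights):
    # x dominates y if we lose less jumping from x to y than from y to x.
    def loss(a, b):
        n = len(a)
        return sum(-math.exp(w * (ai - bi) / n)
                   for ai, bi, w in zip(a, b, weights)) / n
    return loss(x, y) < loss(y, x)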

meneal commented 9 years ago

I do intend to add in the state. I'm really only looking at prelim stuff right now, just to test out some of the machinery of this. S can just be a vector of cans; at least that was my intention.

As far as a candidate that doesn't meet constraints anymore, I'm really thinking you just kill that candidate and regen a new one with a new velocity and whatnot. That's probably the first stupid attempt to roll with.

I have cdom working in simulated annealing now. I pulled everything energy-related out of the sa code into another file called sac. It's working, but it's really hard to know whether it's working correctly. The energy figures we had before gave us an idea of what was actually going on in some sort of understandable context; without those values we don't have metrics we can use as easily.

For example: I ran both sa and sac against the osyczka and fonseca models and just pegged the energy value at 1.0 for sac, since with cdom it's a completely meaningless figure. I wanted to see if the final candidates were close to each other, but in reality they probably won't be, since it's a whole frontier. I'm thinking at this point that we may well need to implement hypervolume and spread metrics so we can get a sense of what's actually happening and whether we're coming up with reasonable values. I suppose we could calculate energy by hand for what we have and check it that way, but we'll need some goodness metric later anyway. The values I got out of the two different optimizers are in a file called sacvsa.txt.

I've figured out how to include cdom in pso; I just want to make sure it works before we get any further with what we're doing. Do either of you have any idea how you would calculate spread or hypervolume?
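For what it's worth, the two-objective case of hypervolume is easy to sketch; this assumes both objectives are minimized and the reference point ref is worse than every point on the front:

def hypervolume_2d(front, ref):
    # Sweep the front sorted by the first objective, adding the strip of
    # area each point contributes below the previous point's height.
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front):
        if y < prev_y:               # dominated points add no area
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

# e.g. hypervolume_2d([(1, 3), (2, 2), (3, 1)], ref=(4, 4)) -> 6.0

Spread, and hypervolume beyond two objectives, are harder; this is just the flavor of it.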

Ignore test2.py. I just pushed it by accident. I needed some way to test the model changes I made.

meneal commented 9 years ago

Came up with a better idea than trying to figure out how to calculate hypervolume and spread right off the bat: I added the energy-calculation mechanisms back into sac.py and used only cdom to actually optimize. I'm coming out with reasonably similar energy values for a final now. They're a bit different:

I would say it's fairly likely that cdom just needs to run longer than from hell to come up with a smaller energy value. Also, we're testing it on a metric that it really doesn't optimize for anymore. Still, it at least gives me a bit of confidence that we have something that works.

I still think it's important to come up with a way to calculate hypervolume and spread though.

meneal commented 9 years ago

Made a bunch of changes to the code for PSO. There were a few mistakes. random.uniform takes its bounds as its parameters; for example, random.uniform(0, 1) produces a random value in the range [0, 1]. I think it may have been a typo, but we had random.uniform taking the values to compute as its parameters, which wouldn't have made sense. I killed the wider for loop and made both computations into list comprehensions, so now we basically loop through the candidates and compute a list comprehension for velocity and then one for position.
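Just to pin down the semantics that bit us: random.uniform treats its two arguments as the bounds of the draw, nothing more.

import random

r = random.uniform(0, 1)   # a uniform float drawn from the bounds [0, 1]
assert 0.0 <= r <= 1.0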

I decided that cdom should probably live in the model. The main reason is that it actually calls a method of the model, and anywhere else we put it, we'd have to pass the model in, which seemed like a pain in the ass.

I haven't set up the constraints in PSO yet, mostly because I've forgotten exactly how to check them with the model. I figured Sasha can put that in when he returns.

As of my most recent push, we have a loop that updates all of the cans and a loop that runs cdom against all of the cans, updating the global best and the personal best for each can. I haven't set things up to work with the state object yet, but I will once it's running with the constraints. I haven't made any moves on the fitness calculation yet, but I think we may have a meeting with Menzies in our future to figure out a good metric to use.

meneal commented 9 years ago

So there were a bunch of bugs in this, but I think a good number were fixed this morning:

The first was obvious, but the second was really hard to fix. Anyway, here's an idea of where we're at now:

Since I've fixed cdom, here's how it's working: we select the first can in the list, run it against all of the candidates in the list, and output the winner of that run. We then keep doing that for all of the cans in the list, basically to see whether we end up with different cans dominating at the end of each run through.
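In code, that round-robin looks roughly like this (a sketch; model.cdom and can.id are stand-ins for whatever our actual names are):

def dominance_rounds(cans, model):
    winner = None
    for start in cans:
        best = start
        # Sweep the whole list, keeping whichever can wins each comparison.
        for other in cans:
            if other is not best and model.cdom(other, best):
                best = other
        print("best id after run ", best.id)
        winner = best
    return winner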

So what's interesting is that you can see that the best id is different for many of the rounds in this output:

=======================
BEGIN DOM PROC K:  192.0
=======================
ZERO DIFFERENCE IDS: 0 15, diff: 0.008
best id after run  0
ZERO DIFFERENCE IDS: 1 27, diff: 0.043
best id after run  26
best id after run  29
ZERO DIFFERENCE IDS: 3 18, diff: 0.074
ZERO DIFFERENCE IDS: 3 21, diff: 0.021
best id after run  29
best id after run  29
best id after run  28
ZERO DIFFERENCE IDS: 6 23, diff: 0.017
best id after run  28
ZERO DIFFERENCE IDS: 7 17, diff: 0.087
best id after run  28
best id after run  29
best id after run  29
best id after run  29
best id after run  26
best id after run  26
best id after run  29
best id after run  29
ZERO DIFFERENCE IDS: 15 0, diff: 0.008
best id after run  0
ZERO DIFFERENCE IDS: 16 26, diff: 0.074
best id after run  26
ZERO DIFFERENCE IDS: 17 7, diff: 0.087
best id after run  28
ZERO DIFFERENCE IDS: 18 3, diff: 0.074
ZERO DIFFERENCE IDS: 18 21, diff: 0.064
best id after run  29
best id after run  26
best id after run  29
ZERO DIFFERENCE IDS: 21 3, diff: 0.021
ZERO DIFFERENCE IDS: 21 18, diff: 0.064
best id after run  29
best id after run  29
ZERO DIFFERENCE IDS: 23 6, diff: 0.017
best id after run  28
best id after run  29
best id after run  29
ZERO DIFFERENCE IDS: 26 16, diff: 0.074
best id after run  15
ZERO DIFFERENCE IDS: 27 1, diff: 0.043
best id after run  26
best id after run  27
best id after run  28
final best id  28

I wanted to control for the fact that they may well have one of the two following features:

In the output, I print when particles are at exactly the same position (not just within some epsilon of the same position, which may also be happening), and I print when they are within an epsilon of the same loss value; notice the ZERO DIFFERENCE lines. Each of those lines first gives the two IDs that were judged to have zero difference in loss and then shows what that difference actually is. The epsilon level is set to 0.1 for now.

I'm not totally sure what's happening yet. Just thought I would post some results to what I'm looking at right now.

Also note that at the end of the output I indicate the number dead / percent attrition. That value is the number of particles that die during the run; death occurs when they are either outside of bounds or outside of constraints. There are a number of other ways to handle out-of-bounds and out-of-constraints particles; this is just a first shot at it. If you want to scan through all of the output, it's in console.txt for now. I'm not using the state logger yet; I know we'll need to make some changes to make that work, and I haven't had time to look at doing that yet.

One other thing: after talking with Sasha a bit yesterday, I think we should just use energy as the metric for now and integrate more interesting metrics after talking with Dr. Menzies in class on Tuesday. I'm concerned about scope, but I think we can get more done before we speak to him. Also, I'm sorry about the number of print statements that are either in the code or commented out right now; I'm just actively debugging.

ghost commented 9 years ago

I've added energy as a metric, and the energy of the best solution is printed at the end. The energy values are in the right part of the energy spectrum, but they look a bit high to me.

I also made a slight change on line 164, where we decide whether we are dominated by another candidate. If I understood Menzies correctly, just checking that we do not dominate another candidate does not imply that the other candidate dominates us. So I added another check: we now only update the best when can dominates c and c does not dominate can. Let me know if I'm misunderstanding this part.

I think we have a good framework up now, and we have to start tweaking different things to get lower energy values. We'll probably also need more detailed output, like SA has. It's hard to represent this data in text form, because it's really positional data that should be drawn somehow. Anyone have any ideas?

meneal commented 9 years ago

So, for your first comment: energy for our setup is determined dynamically by the seen values for each decision over the entire runtime of the rig. The code that's checked in only initializes the maxes and mins, but doesn't actually update them during runtime. This is really part of the reason I wasn't sure I wanted to implement energy for our setup: basically, we need to either update energy at each step for all of the cans, or potentially update max/min for some subset of the cans. I hadn't completely decided which way to roll, but updating max and min for 30 cans every iteration could get really expensive. I'll set that up this afternoon when I get home.

Second comment: that second check is in fact embedded in the code for cdom. The diff amount I'm outputting is actually the difference between the two checks. Take a look at the code in the model and see if you're satisfied.

Representation is a whole other thing. I'm not totally sure yet what to do with that.

ghost commented 9 years ago

I do update the maxes and mins on line 137, but I see what you're saying about it getting expensive. I'm not sure how else to do it; all we're doing is a series of comparisons and array accesses/sets, and a size of 30 is not very large. If our model needs to scale, then I think we'd need to optimize a bit more, but for now I wouldn't worry too much about it as long as it runs in a reasonable amount of time on our machines.

I had a look at cdom, and you're right: the calls to loss take care of both sides. However, I would stick with the first part of the if-statement instead of the negated part, since it's easier to understand.

aisobran commented 9 years ago

You have to run the update at every iteration with updateObjectiveMaxMin or the energy calculation is pretty much invalid. It's not too expensive: O(number of objectives * steps * candidates), which is strictly lower complexity than the optimizers at O(number of objectives * steps * candidates^2). Since including it doesn't even increase the complexity by a constant factor, expense is not a good reason to avoid it.

Even if we don't use energy in the optimizer, we should still run updateObjectiveMaxMin, because we want valid energies for the final results when we are comparing optimizers.
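A sketch of what that per-step bookkeeping amounts to (hypothetical method and field names; the real call is updateObjectiveMaxMin):

def update_objective_max_min(model, can):
    # Fold one candidate's objective scores into the running extremes.
    # O(number of objectives) per call, so doing it for every candidate
    # at every step stays below the optimizer's own cost.
    for i, score in enumerate(model.objectives(can)):
        model.obj_min[i] = min(model.obj_min[i], score)
        model.obj_max[i] = max(model.obj_max[i], score)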

meneal commented 9 years ago

Joseph: you're right about updateObjectiveMaxMin; you do have it in the right place! Sorry about that. I think we have what we need there.

Sasha: You're right.

I fixed another weird bug with best: there was a problem where we were getting different bests in the same iteration. That's done. There's still something happening where the particles end up very similar in terms of loss, but I'm not satisfied that I understand when or why that happens; I'll mess with it some more to try to figure it out. There are certainly a large number of deaths for Osyczka2. Almost all of the particles die, though I guess that's not that surprising.

For the main PSO, I've turned off a lot of the output just to see what we get in terms of energy. I'm looking at 0.22 for Osyczka and 0.69 for Fonseca. We can switch those out for whatever; I just didn't have time to try more than that for the moment. That's with just 500 changes and no retries.

You can see in this extra little file, comparison_ids.txt, that the correct thing is happening in each iteration in terms of domination.

aisobran commented 9 years ago

Yeah I'm thinking dying when outside of bounds/constraints is not going to be a good heuristic.

I added a simple graphing capability so we can see where the particles are going. This may help us understand the movements we're implementing. I'll add a color for each particle; I just need to flesh this out a little more.

Also, this grapher will only work with models with two decisions, because it's only two-dimensional. It will run for more, but it will just ignore any decision past the first two.

I ran tanaka, and the behaviour looks very clumped.

I'll implement a graph of the objectives as well.
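A minimal sketch of that kind of plot (matplotlib only; can.pos as the decision vector is a stand-in for however we actually store it):

import matplotlib.pyplot as plt

def plot_swarm(cans):
    # One fixed color per can, scattered over the first two decisions.
    colors = plt.cm.rainbow([i / max(1, len(cans) - 1)
                             for i in range(len(cans))])
    xs = [c.pos[0] for c in cans]
    ys = [c.pos[1] for c in cans]
    plt.scatter(xs, ys, c=colors)
    plt.xlabel("decision 1")
    plt.ylabel("decision 2")
    plt.show()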

aisobran commented 9 years ago

Also, you'll need matplotlib.

aisobran commented 9 years ago

How far are these particles supposed to move? I'm seeing some huge jumps.

meneal commented 9 years ago

I totally agree; it's not a long-term heuristic. Menzies suggests what I have in a comment in the code: reset the position of the particle to the max or min depending on its current position and set the velocity to zero; then the obvious thing is that it will be pulled back into the search area by the global optimum. I also have a note inline that we can try either wiping the particle's memory of its best or leaving its best. I just wanted to get to the point we're at now; there were a bunch of bugs.
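A sketch of that reset (hypothetical names; lo and hi are the per-decision bounds from the model):

def reset_to_bounds(pos, vel, lo, hi):
    # Clamp each out-of-bounds decision to its nearest bound and zero
    # that velocity; the pull toward the global best then brings the
    # particle back into the search area.
    for i, p in enumerate(pos):
        if p < lo[i]:
            pos[i], vel[i] = lo[i], 0.0
        elif p > hi[i]:
            pos[i], vel[i] = hi[i], 0.0
    return pos, vel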

I haven't looked at the output yet, but a graphical look at this will be super helpful. As far as the large jumps go, keep in mind that velocity is not capped. The constriction factor should restrict velocity, but from what you're saying it isn't doing a great job of that. :) Menzies suggests clipping each velocity to xmax (the bounds for that particular variable), but then kind of dithers, from what I read, and talks about the constriction factor supposedly helping. For now there's no vmax; I can totally add one if you want, though.

aisobran commented 9 years ago

Added unique colors for each can.uniq.

aisobran commented 9 years ago

Yeah, that makes more sense. It may not be that the jumps are huge; it's just that there are so many deaths that it looks very random. Maybe I could track death and then give it a unique color.

meneal commented 9 years ago

Dude, mad props on these graphs. They're really interesting. Do you have a link to the documentation you used to set it up?

meneal commented 9 years ago

btw: interesting results on these runs, actually. We're getting pretty low global energy values on some of these; Tanaka in particular is really low. There's such a wide variance in the attrition percentages too: some are super low, some kill almost all of the particles.

aisobran commented 9 years ago

I'm just using the matplotlib docs.

I added resizing of the dots based on the energy of the vector. Right now, the bigger the dot, the better the vector. After seeing the output, it looks pretty good: all the clumping of the particles is at the best energies. I'm going to reverse this scaling (the smaller the dot, the better the energy) and see the resulting output. This may give a better presentation.

aisobran commented 9 years ago

I inverted the relationship so smaller energies (better candidates) are smaller dots. I think this looks better.

I also added a graph for energies. Interesting graphs indeed.

After graphing the energies, I've noticed a bug in the code: the objectives are not normalizing to between 0 and 1. I'm trying to troubleshoot this right now. Something is not adding up, as none of the energy calculations are exceeding these bounds. My initial guesses: the model initially passed in to the grapher is copied instead of referenced, so later updates to maxmin are not incorporated (I don't think this is how Python works); the max/min implementation is off (but if that were the case we'd get some really high energy values); some vectors are not being checked in to the max/min tracking; there's some roundoff error occurring; or the normalization calculation is incorrect.

As a sanity check: normalization should be (x - min) / (max - min), right?

meneal commented 9 years ago

For the copied-versus-referenced part, you can actually use the id function and print out the result inside pso and inside grapher; the id function returns a unique identifier for each object, which could help in debugging. I just went ahead and did that. They're both the same unique id:

id in pso  4333997392
id in grapher:  4333997392

I don't want to mess with your mojo, so I won't make any changes, but it could be that we cull the particles before we update the objective max/min on line 137. If we updated the objective max/min on line 117, we might come closer.

That said, the sheer number of hits outside of the range for Tanaka and Srinivas makes me wonder if that could possibly be the cause. The Tanaka ones are super messed up for some reason: we have a large number of energy values below zero and at least a good number over one. We also have a good number running negative and above one on Srinivas.

Per here it looks like the normalization formula is correct.

aisobran commented 9 years ago

Just to clarify: you mean we have "a large number of objective values below zero"?

If the energies were below zero, then we'd have a bigger problem, as they could never be below zero: the equation is sqrt(x^2 + y^2 + z^2 + ...).
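In code, the relationship we're talking about (a sketch; the final division by sqrt(n) is my assumption, so that a vector of all-worst objectives comes out at exactly 1):

import math

def energy(objs, mins, maxs):
    # Normalize each objective to [0, 1], then take the distance of the
    # normalized vector from the origin; it can never be negative.
    norms = [(x - lo) / (hi - lo) for x, lo, hi in zip(objs, mins, maxs)]
    return math.sqrt(sum(n * n for n in norms)) / math.sqrt(len(norms))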

meneal commented 9 years ago

Yep, objective values, my bad.