davidker / unisys

Master repository for new changes to drivers/staging/unisys and drivers/visorbus

visorhba: queue_disk_add_remove(): dar_work_queue never initialized??[upstream] #37

Closed selltc closed 8 years ago

selltc commented 8 years ago

Source: Dan Carpenter dan.carpenter@oracle.com Mon 5/2/2016 5:53 AM

I must be blind. How does queue_disk_add_remove() work? Where do we initialize dar_work_queue?

See KanBoard-995

selltc commented 8 years ago

Dan is correct. I can't see how the handling of cmdrsp->cmdtype == CMD_NOTIFYGUEST_TYPE and its associated process_disk_notify() and queue_disk_add_remove() could possibly EVER work. The only logical conclusion I can come to is that this code is never executed. I will do some rudimentary testing to try to convince myself this is true, then I guess I should just strip it out.

davidker commented 8 years ago

It has worked in the past, but we now have static configuration of disks; we aren't doing dedicated HBAs anymore. If we did, we would have to bring this code back, or some code like it. It was designed for the IOVM to discover a disk on the HBA and then propagate that change to the guest.

The only time it would come into play now is if we have a guest with a disk that is defined but not present during initial discovery, which then gets added after we have discovered the HBA; my guess is we would probably just tell the guest to reboot at that point.

selltc commented 8 years ago

Yeah, the logic in virthba looks OK, just NOT in visorhba. The problem with leaving the code in visorhba is that, now that Dan has highlighted it, I would have to figure out how to test it after I fix it.

davidker commented 8 years ago

Sadly, my advice is get rid of it now. Or I guess we could come up with a dedicated HBA. Sorry for messing up the logic.

selltc commented 8 years ago

NOT a problem!

selltc commented 8 years ago

OK, I did commit c56f0c3f270c798ad44c3cd7dda099c4676c9a16 for this, which is checkpatch-clean and tests clean, but I just realized it conflicts with a patch that is currently in mid-air going upstream, which is commit 1b54049435869cafe6b6d274da2377534f9072cf for github #38. So we need to wait for that patch to land in Greg's staging-next first.

davidker commented 8 years ago

Please feel free to still submit it to me. I have queues for patches, which I guess Greg doesn't know about. I'll send it when it is ready to go there.

selltc commented 8 years ago

I'll actually have to re-do this patch against a new staging-next base once it becomes available. So this will be lingering around for a while...

davidker commented 8 years ago

Tim, I've put your patches into upstream-next. Can you test whether the new upstream-next has your changes and still boots? That should handle the in-flight issues. If the patches pass testing, I'll hold them until the in-flight patches are resolved.

selltc commented 8 years ago

I verified that the patches you put in upstream-next look perfect, test cleanly (I re-did all of my unit tests), and are checkpatch-clean. Thanks David! I guess that means we're all ready to go, should we get lucky and Greg accept our current patchset as-is.

davidker commented 8 years ago

Thanks for the double-check! This morning we got word that Greg put us in staging-testing, so we should be close to being able to send this up.

selltc commented 8 years ago

Yeah, it looks like it is your commit 9b0cbb951ec96b147f486ad4473957859fa749af (upstream-next) that we should submit for this soon. Thanks.

selltc commented 8 years ago

Greg committed to staging-testing, then to staging-next a few hours later: