EdgeLab-FHDO / Edge-Diagnostic-Platform

MIT License

Skeleton and Samples #27

Closed theodorelu27 closed 4 years ago

theodorelu27 commented 4 years ago

Pull Request for Issue #20

I used Spark since we only need simple REST input/output.

This is v0 which includes:

Still missing the tests

Other considerations:
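For context, Spark (the sparkjava.com micro-framework mentioned above, not Apache Spark) lets you declare REST routes in a few lines. A minimal sketch of what such an endpoint could look like — the paths and the handler bodies here are placeholders, not the actual platform code:

```java
import static spark.Spark.*;

// Minimal Spark route sketch (hypothetical paths, not the platform's real API).
public class RestRouterSketch {
    public static void main(String[] args) {
        port(4567); // Spark's default port

        // REST input: accept a command via POST
        post("/command", (req, res) -> {
            // placeholder: hand req.body() to the platform here
            return "accepted";
        });

        // REST output: expose data via GET
        get("/status", (req, res) -> "ok");
    }
}
```

This style is why Spark fits the "simple REST input/output" requirement: each route is a single lambda, with no servlet or framework boilerplate.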

alyhasansakr commented 4 years ago

I see you did it as MasterInput and MasterOutput; now we need an example configuration where we use REST as input and output. Don't forget to write the required tests, and let me know if you need any help.

theodorelu27 commented 4 years ago

@alyhasansakr I added the new configuration as a separate file and in the MasterConfigurator class. I'll work on the tests tonight, but are unit tests enough? I'm also planning to add the code for starting the server internally; currently the server can be started by running the RestRouter's main class.

alyhasansakr commented 4 years ago

We need end-to-end tests; you can look at the other tests for the Scenario class as an example (I think it is not merged yet). If you cannot automate it, then write a description of how to test it manually.

theodorelu27 commented 4 years ago

@alyhasansakr I added a simple test using the Rest Assured library; the server is started before the tests run.
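For reference, a Rest Assured test typically follows the given/when/then style. A sketch of what such a test could look like — the path and the expected body here are hypothetical, and the server is assumed to already be listening:

```java
import static io.restassured.RestAssured.given;

// Hypothetical Rest Assured test sketch; assumes the REST server
// was started beforehand (e.g. in a @BeforeAll hook).
public class RestInputTestSketch {

    void postCommandReturns200() {
        given()
            .body("update_gui")           // request payload (placeholder)
        .when()
            .post("/node/execute/update_gui")
        .then()
            .statusCode(200);             // command accepted by the Master
    }
}
```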

theodorelu27 commented 4 years ago

@juan-castrillon after merging master into my branch, the ScenarioDispatcherTests have been failing in relation to OnGUIUpdate:

```
org.junit.ComparisonFailure: expected:<[]helmChartExecution
m...> but was:<[GUIUpdateExecution
]helmChartExecution
m...>
org.junit.ComparisonFailure: expected:<[]helmChartExecution
m...> but was:<[GUIUpdateExecution
]helmChartExecution
m...>
org.junit.ComparisonFailure: expected:<[]helmChartExecution
m...> but was:<[GUIUpdateExecution
]helmChartExecution
m...>
org.junit.ComparisonFailure: expected:<...atchMakingExecution
[GUIUpdateExecution
]> but was:<...atchMakingExecution
[]>
```

There was a conflict because my branch was still using the old configuration. Is there any file that I might have missed during the merge?

alyhasansakr commented 4 years ago

@theodorelu27 How could I know if I can't see the merge commit? Push it so we can see it.

juan-castrillon commented 4 years ago

The only files you should have are:

src/main/resources/scenarios/dummyScenario.json
src/main/resources/Configuration.json
src/test/resources/dummyScenario.json

theodorelu27 commented 4 years ago

@alyhasansakr It's the previous merge in https://github.com/EdgeLab-FHDO/Edge-Diagnostic-Platform/pull/27/commits/d0fa8f5fac812ec39a8fcb803f7a864201e49327. I'll keep that in mind if I ever run into similar problems. @juan-castrillon I tried again and, after several attempts, it worked. The files were complete, so it was probably just a problem during execution; thanks for helping me check it out.

alyhasansakr commented 4 years ago

Your last commit didn't fail, and even https://github.com/EdgeLab-FHDO/Edge-Diagnostic-Platform/commit/d0fa8f5fac812ec39a8fcb803f7a864201e49327 didn't fail because of that; that is why I thought you had a new commit. Also, running the same test should not give you different results (unless it is time-dependent). Anyway, whatever it was, it is no longer important.

Now to the important stuff: make the changes and let's see how it looks.

theodorelu27 commented 4 years ago

I've moved the server starter to a Runner, but I'm still not sure whether implementing it as a singleton Runner is the correct way. Also, for test purposes, I executed the command from RestInput; is that a good enough implementation for now?

theodorelu27 commented 4 years ago

@juan-castrillon thank you very much for the feedback. I've moved the implementation to its own thread that is managed in main, with the server started in its own thread, and updated the code to work better with Runnables.
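The "server in its own thread, managed from main" arrangement can be sketched with plain Java threads. `ServerRunner` below is a hypothetical stand-in for the class that starts the REST server; the sleep loop is a placeholder for the server's real lifecycle:

```java
// Sketch of running a server component in its own thread, managed from main.
// ServerRunner is a hypothetical stand-in for the platform's server starter.
public class ServerRunner implements Runnable {
    private volatile boolean running = true;

    @Override
    public void run() {
        // the embedded server would be started here; placeholder loop below
        while (running) {
            try {
                Thread.sleep(50); // stand-in for the server's event loop
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public void stop() { running = false; }

    public static void main(String[] args) throws InterruptedException {
        ServerRunner runner = new ServerRunner();
        Thread serverThread = new Thread(runner, "rest-server");
        serverThread.start();  // server lives in its own thread
        // ... main keeps managing the platform's other threads ...
        runner.stop();         // orderly shutdown
        serverThread.join();
        System.out.println("server thread stopped");
    }
}
```

Using a `volatile` stop flag plus `join()` gives main full control over the server's lifetime, which matches the Runnable-based design described above.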

theodorelu27 commented 4 years ago

thanks for the feedback @juan-castrillon, I've tried to refine the singleton component.

I'm having some difficulty getting the output from the runner. Especially for GET requests, I think it would be ideal to include the result in the response.

So my current idea is to make the REST Input wait until the Output is available and return it to the requester, but I guess this would go against the Runnable behavior, since Input shouldn't be handling Output as well.

any suggestions? @alyhasansakr

alyhasansakr commented 4 years ago

@theodorelu27 I don't understand what the problem is. REST input and output are independent of each other: REST input is concerned with POST requests (with or without the REST runner and REST router), and REST output makes data available for GET requests (it does not actually deliver data to the requester). There should be another component (the REST runner, with help from the REST router) that responds to the GET requests.

This is not an easy task; if there is still a problem, please call me as soon as possible to sort it out.
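The decoupling described above — inputs write into shared platform state, and the output component only reads from it when a GET arrives — can be sketched with a shared map. All class and method names below are hypothetical illustrations, not the platform's real API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of input/output decoupling: any MasterInput writes
// into shared state; the MasterOutput side only reads it on a GET request.
public class PlatformState {
    // e.g. client id -> assigned edge node, set by the match-making MasterInput
    private final Map<String, String> assignedNodes = new ConcurrentHashMap<>();

    // called by any MasterInput (REST POST handler, match-making algorithm, ...)
    public void assignNode(String clientId, String node) {
        assignedNodes.put(clientId, node);
    }

    // called on the MasterOutput side when the REST runner serves a GET
    public String nodeFor(String clientId) {
        return assignedNodes.getOrDefault(clientId, "");
    }

    public static void main(String[] args) {
        PlatformState state = new PlatformState();
        state.assignNode("client-1", "edge-node-7");   // input side writes
        System.out.println(state.nodeFor("client-1")); // output side reads
    }
}
```

Because neither side calls the other directly, a GET never has to wait on a particular POST — which is exactly why RestInput does not need to handle Output.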

theodorelu27 commented 4 years ago

@alyhasansakr Ah okay. I should have done this earlier, but I want to check that my design is correct.

So we'd have a total of three components for the REST implementation:

I would redesign the tests to fit the agreed design as well

alyhasansakr commented 4 years ago

@theodorelu27 Software development requires a little bit of abstraction.

I agree with your description of RestRouter and RestInput but not RestOutput.

RestOutput is a MasterOutput, which means it receives input from one or more MasterInput, and thus is not limited to RestInput.

For example, the match-making algorithm (which is a MasterInput) needs to set the edge node server name for each client, then the client can use a GET request to retrieve the value.

The reason you are confused is that in the test case we send a POST request and try to retrieve the same information with a GET request. That is just one use case; you can't use it to derive the abstract description.

theodorelu27 commented 4 years ago

@alyhasansakr thank you for the feedback. Yeah, I think I got it mixed up because I expected that the POST should also return a direct answer, just like in the Master Mapping Test.

I tried to fix the design as well:

For the first version, we will include two sample implementations:

POST /node/execute/{command}
The request should contain an executable {command}. The response would return:
200: the command is successfully passed to the Master for processing (current implementation: it is put inside the command variable so RestRunner can read it). As there are currently no default pre-checks, the request is still considered successful even if the command is wrong (e.g. updategui instead of update_gui).
500: an exception occurs or a pre-check fails.

GET /client/get_node
As a stub, this request returns a pop of the output queue from the MasterOutput. The response would return:
200: MasterOutput was able to provide the information, or a proper empty response (e.g. an empty array, because there are currently no available nodes).
500: an exception occurs or a pre-check fails.
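The queue-pop semantics of the GET /client/get_node stub can be sketched in a few lines. The class and method names below are illustrative only; the real MasterOutput implementation lives elsewhere in the platform:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of the GET /client/get_node stub semantics described above:
// pop one entry from the output queue, or return a proper empty response.
public class GetNodeStub {
    private final Queue<String> outputQueue = new ConcurrentLinkedQueue<>();

    // the MasterOutput side publishes available nodes here
    public void publish(String node) { outputQueue.add(node); }

    // returns the popped entry, or "[]" when no nodes are available (still a 200)
    public String getNode() {
        String node = outputQueue.poll();
        return node != null ? node : "[]";
    }

    public static void main(String[] args) {
        GetNodeStub stub = new GetNodeStub();
        System.out.println(stub.getNode()); // queue empty -> "[]"
        stub.publish("edge-node-3");
        System.out.println(stub.getNode()); // -> "edge-node-3"
    }
}
```

Note that an empty queue maps to an empty 200 response rather than an error, matching the spec above: "no available nodes" is a valid answer, while 500 is reserved for exceptions and failed pre-checks.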

alyhasansakr commented 4 years ago

Good, that is correct. Re-request my review after you finish.

theodorelu27 commented 4 years ago

got it @alyhasansakr. I'm thinking of moving the implementation of the URL tasks to a RestController, to separate that responsibility from RestInput and RestOutput. Do you think that would be good, or should we wait and see if they get too big later, since currently they only handle simple tasks?

alyhasansakr commented 4 years ago

No need to separate it now; we will see in the future.