afonsocastro closed this issue 1 year ago
I was encountering some problems importing the RobotiqHand class from the gripper package into this script, which lives in a different package. I tried to solve it by following the steps on this site (https://roboticsbackend.com/ros-import-python-module-from-another-package/). The same procedure was applied to the lib package to make it accessible to all scripts within the project.
Even so, I was not able to import the RobotiqHand class from the gripper package. In this version, the script runs with a copy of RobotiqHand and GripperStatusDTO until I can fix this issue. However, this does not compromise the script's functionality.
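For reference, the approach from that tutorial boils down to a small setup.py at the package root plus a catkin_python_setup() call in CMakeLists.txt. This is only a sketch, under the assumption that the importable code lives in a src/ subfolder of the gripper package (the folder layout is an assumption):

```python
# setup.py at the root of the gripper package; only takes effect if
# catkin_python_setup() is also called in the package's CMakeLists.txt
from setuptools import setup
from catkin_pkg.python_setup import generate_distutils_setup

d = generate_distutils_setup(
    packages=['gripper'],      # Python package that exposes RobotiqHand
    package_dir={'': 'src'},   # assumes the modules live under src/
)
setup(**d)
```

After rebuilding the workspace, other packages should then be able to do `from gripper import RobotiqHand`.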
Additionally, this script already starts storing data at the beginning of the force application and stops saving it at the end. The diagram explains the current script's logic.
This behavior seems to be appropriate for our intent! I want to try it :)
I have a feeling that this method will save a different number of values for each experiment, am I right? Distinct applied forces used for training will produce a distinct number of stored values... We should keep this in mind for future developments.
Good work!
Also, I have a new idea for importing modules from different folders with no problem at all: we should mark the folder containing the module of interest as a Sources Root.
In PyCharm Community we can simply:
Modules within that folder will now be available for import. Any number of folders can be so marked.
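One caveat worth noting: marking a folder as a Sources Root only fixes imports inside PyCharm's editor; at runtime the interpreter still needs the folder on sys.path (or PYTHONPATH). A minimal sketch of the runtime equivalent, with the example path being an assumption:

```python
import os
import sys

def add_sources_root(path):
    # Prepend a folder to sys.path so its modules become importable,
    # mirroring what PyCharm's "Mark Directory as > Sources Root"
    # does for code inspection only.
    path = os.path.abspath(path)
    if path not in sys.path:
        sys.path.insert(0, path)
    return path

# e.g. add_sources_root('../gripper/src')  # hypothetical layout
```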
@Joel-Baptista let's discuss this topic later. I feel the need to standardize this module-importing question.
@afonsocastro I created a node called action_interpreter_node.py, which takes data in the same way as in the experiments. I left 2 TODOs marking where the training and classification functions can be placed.
Epoch 0: 19 / 60 => 31.67%
Epoch 1: 37 / 60 => 61.67%
Epoch 2: 43 / 60 => 71.67%
Epoch 3: 43 / 60 => 71.67%
Epoch 4: 42 / 60 => 70.00%
Epoch 5: 43 / 60 => 71.67%
Epoch 6: 53 / 60 => 88.33%
Epoch 7: 54 / 60 => 90.00%
Epoch 8: 48 / 60 => 80.00%
Epoch 9: 49 / 60 => 81.67%
Epoch 10: 52 / 60 => 86.67%
Epoch 11: 54 / 60 => 90.00%
Epoch 12: 56 / 60 => 93.33%
Epoch 13: 50 / 60 => 83.33%
Epoch 14: 56 / 60 => 93.33%
Epoch 15: 55 / 60 => 91.67%
Epoch 16: 56 / 60 => 93.33%
Epoch 17: 59 / 60 => 98.33%
Epoch 18: 57 / 60 => 95.00%
Epoch 19: 54 / 60 => 90.00%
Epoch 20: 54 / 60 => 90.00%
Epoch 21: 54 / 60 => 90.00%
Epoch 22: 53 / 60 => 88.33%
Epoch 23: 55 / 60 => 91.67%
Epoch 24: 57 / 60 => 95.00%
Epoch 25: 56 / 60 => 93.33%
Epoch 26: 56 / 60 => 93.33%
Epoch 27: 55 / 60 => 91.67%
Epoch 28: 54 / 60 => 90.00%
Epoch 29: 55 / 60 => 91.67%
Epoch 30: 56 / 60 => 93.33%
Epoch 31: 54 / 60 => 90.00%
Epoch 32: 56 / 60 => 93.33%
Epoch 33: 57 / 60 => 95.00%
Epoch 34: 55 / 60 => 91.67%
Epoch 35: 55 / 60 => 91.67%
Epoch 36: 55 / 60 => 91.67%
Epoch 37: 55 / 60 => 91.67%
Epoch 38: 54 / 60 => 90.00%
Epoch 39: 54 / 60 => 90.00%
Epoch 40: 54 / 60 => 90.00%
Epoch 41: 54 / 60 => 90.00%
Epoch 42: 54 / 60 => 90.00%
Epoch 43: 54 / 60 => 90.00%
Epoch 44: 54 / 60 => 90.00%
Epoch 45: 54 / 60 => 90.00%
Epoch 46: 54 / 60 => 90.00%
Epoch 47: 54 / 60 => 90.00%
Epoch 48: 54 / 60 => 90.00%
While trying to figure out the problems with the timestamps, I found something. There might be a problem with the timestamps given by the driver nodes. The image prints the max timestamp of the 3 nodes, but that's irrelevant since they are very close. Notice that whenever the number of seconds changes, the same error occurs: the next timestamp is larger than it should be. For example, 1657276314.9804497 is the last timestamp within that second, and the next timestamp is 1657276315.804361.
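One plausible cause (an assumption on my part, not verified against the driver code) is that the timestamp is built by string-concatenating the secs and nsecs fields, which silently drops leading zeros in nsecs. The numbers in the example reproduce exactly under that hypothesis:

```python
secs, nsecs = 1657276315, 80436100  # nsecs happens to start with a zero

# Buggy: concatenating the fields drops the leading zero of nsecs,
# making the timestamp look ~0.72 s later than it really is
wrong = float('%d.%d' % (secs, nsecs))   # 1657276315.804361

# Correct: combine the two fields arithmetically
right = secs + nsecs * 1e-9              # 1657276315.0804361
```

This would only show up when nsecs starts with one or more zeros, which matches the error appearing right after the seconds roll over.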
Some notes about the data storage for learning. The way the timestamps were measured caused problems, which are exposed in the comment before this one. In an effort to solve this, a new approach was taken: the timestamp is now measured when the message arrives from the topic at the data_aquisition_node, instead of being read from the header of the message. This adds an error of about 1 to 3 milliseconds to the timestamp measurement, a degree of imprecision that can be ignored.
I have some information about the normalize function used on the data, which is from sklearn. It is not ideal, but it is not that bad. Working column by column, the function finds the number with the largest absolute value and divides all numbers by it. This means that zeros remain zeros, negatives remain negatives, and positives remain positives. It also means that the data doesn't span -1 to 1 in every sample; however, it is guaranteed that no number is larger than 1 or smaller than -1. See the image for an example.
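The behavior described is consistent with sklearn.preprocessing.normalize called with norm='max' and axis=0 (or, equivalently, MaxAbsScaler), though which variant the script uses is an assumption here. A dependency-free sketch of what it does to a single column:

```python
def maxabs_scale_column(values):
    # Divide every value by the column's largest absolute value:
    # zeros stay zero, signs are preserved, and the result lies
    # in [-1, 1], but -1 or 1 is only reached at the extreme value.
    m = max(abs(v) for v in values)
    return [v / m for v in values] if m else list(values)

maxabs_scale_column([2.0, -4.0, 0.0, 1.0])  # -> [0.5, -1.0, 0.0, 0.25]
```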
When analyzing the forces and torques from the gripper sensors, a relevant detail was discovered. The sensor values are not static when the robot is at rest; they fluctuate in some sort of normal distribution. Various tests were done in an attempt to quantify this lack of precision and understand its nature. The figure in this comment shows a frequency graph of all 6 values for one of the tests. The tests took 10 seconds at a 100 Hz frequency, to mimic the training scene. The red line represents the average of the values and the yellow lines represent the 99.9% confidence interval of the average value.
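For reproducibility, this is roughly how such an interval can be computed; the z value for 99.9% and the use of the normal approximation are assumptions about how the analysis was done:

```python
import math
import statistics

def mean_confidence_interval(samples, z=3.291):
    # z = 3.291 is the two-sided normal quantile for 99.9% confidence.
    # The standard error shrinks with sqrt(n), so 1000 samples
    # (10 s at 100 Hz) give a fairly tight interval around the mean
    # even when the raw readings fluctuate noticeably.
    mu = statistics.mean(samples)
    sem = statistics.stdev(samples) / math.sqrt(len(samples))
    return mu - z * sem, mu + z * sem
```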
I'll leave this image here as confirmation that the values of the matrices correspond to the notes taken on the experimentation day.
Further required developments on data_acquisition_node:
The stored data should create its own file with a counter in its name: e.g., sample1, sample2, and so on. We will probably need to distinguish training experiments made with different poses, so I suggest encoding this information in the names: e.g., pos1_sample1, pos1_sample2, pos2_sample1, and so on. Each different gesture should have its own folder: e.g., push, pull, twist.