A robot arm control library providing instruction interpretation, object recognition, and arm motion planning and execution.
Ubuntu 18.04.1 LTS
Python 2.7
ROS Melodic
ros_control
MoveIt!
gazebo_ros_pkgs
PointNet
pyOpenSSL
MQTT
TensorFlow 1.0
MeshLab
Clone this repository into your ROS catkin workspace and build the package:
cd ../catkin_ws/src
git clone https://github.com/scottlx/Wheelchair-Arm-Control.git
cd ..
catkin_make
Source the workspace setup file:
source ../catkin_ws/devel/setup.bash
URDF files, mesh files, an RViz model-visualization launch file, and a Gazebo launch file.
Visualize the arm model in RViz:
roslaunch my_arm display.launch
Spawn the model into Gazebo:
roslaunch my_arm gazebo.launch
Configuration files generated by the MoveIt! Setup Assistant.
Play with the MotionPlanning plugin in RViz:
roslaunch arm_moveit_config execution.launch
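The same motion goals can also be sent programmatically through the moveit_commander Python API. The snippet below is a minimal sketch, assuming a planning group named "arm"; check the SRDF generated by the Setup Assistant for the actual group name.

```python
# Minimal sketch: send a position goal to the arm through MoveIt!.
# The group name "arm" and the target coordinates are assumptions.
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("arm_motion_demo", anonymous=True)

group = moveit_commander.MoveGroupCommander("arm")  # assumed group name
group.set_position_target([0.3, 0.0, 0.4])          # end-effector x, y, z in meters
plan = group.plan()                                  # returns a RobotTrajectory in Melodic
group.execute(plan, wait=True)                       # run the trajectory on the arm
group.stop()
moveit_commander.roscpp_shutdown()
```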
A Python execution script that integrates the whole system:
python MQTT_sub.py
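MQTT_sub.py listens for the voice commands relayed from AWS IoT and triggers the pipeline. Below is a minimal sketch of such a subscriber, assuming the paho-mqtt client; the broker address, topic name, and handler body are illustrative, not the script's actual values.

```python
# Sketch of an MQTT subscriber that receives voice commands and hands
# them to the recognition/planning pipeline. Broker and topic are assumed.
import paho.mqtt.client as mqtt

BROKER = "localhost"    # assumption: local broker or bridged AWS IoT endpoint
TOPIC = "arm/command"   # assumption: illustrative topic name

def on_connect(client, userdata, flags, rc):
    print("Connected to broker with result code %d" % rc)
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    command = msg.payload.decode("utf-8")
    print("Received command: %s" % command)
    # hand the parsed command to object recognition and motion planning here

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, 60)
client.loop_forever()
```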
1. Place a Kinect in the Gazebo environment.
2. Get the raw point cloud data and preprocess it (split the data into small batches, normalize, etc.); a sketch of this step follows the list. Here is the original point cloud data:
3. Feed the preprocessed point cloud data into PointNet. Here is the point cloud labeled with different colors:
4. Get the location of the object of interest according to the point labels.
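Steps 2 and 4 could look roughly like the sketch below, assuming the cloud arrives as an (N, 3) NumPy array and that PointNet consumes fixed-size, normalized batches; NUM_POINT, the function names, and the centroid heuristic are illustrative.

```python
# Sketch of the preprocessing and label-lookup steps. Array shapes and
# NUM_POINT are assumptions about how the PointNet model is fed.
import numpy as np

NUM_POINT = 4096  # assumed number of points per PointNet batch

def preprocess(cloud):
    """Split an (N, 3) cloud into fixed-size batches, centering each
    batch at the origin and scaling it into the unit sphere."""
    batches = []
    for start in range(0, len(cloud) - NUM_POINT + 1, NUM_POINT):
        batch = cloud[start:start + NUM_POINT]
        batch = batch - batch.mean(axis=0)                      # zero mean
        batch = batch / np.max(np.linalg.norm(batch, axis=1))   # unit sphere
        batches.append(batch)
    return np.stack(batches)

def object_location(cloud, labels, target_label):
    """Estimate the object position as the centroid of all points
    PointNet labeled as the object of interest."""
    points = cloud[labels == target_label]
    return points.mean(axis=0)  # (x, y, z) target for motion planning
```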
1. Go to the Alexa developer console and create a new skill: https://developer.amazon.com/alexa/console/ask
2. Go to Amazon Web Services and create a new Lambda function and IoT service: https://console.aws.amazon.com/console/home?region=us-east-1#
3. Use Alexa_skill.json to deploy your new skill.
4. Upload Lambda_arm_control.zip to deploy your Lambda function.
5. Connect the three parts together; you can now see the topic published in the AWS IoT MQTT client whenever you give a new voice command to Alexa.
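For orientation, here is a hedged sketch of what such a Lambda function might do: pull the object name out of the Alexa intent and publish it to an AWS IoT MQTT topic with boto3. The topic name and slot name are assumptions; the actual handler is in Lambda_arm_control.zip.

```python
# Sketch of an Alexa-triggered Lambda handler that republishes the
# spoken command over AWS IoT. Topic and slot names are assumptions.
import json
import boto3

iot = boto3.client("iot-data", region_name="us-east-1")

def lambda_handler(event, context):
    # extract the spoken object name from the Alexa intent slots
    slots = event["request"]["intent"]["slots"]
    target = slots["object"]["value"]  # assumed slot name

    # publish so MQTT_sub.py (subscribed via AWS IoT) can react
    iot.publish(topic="arm/command", qos=1,
                payload=json.dumps({"target": target}))

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText",
                             "text": "Moving the arm to the " + target},
            "shouldEndSession": True
        }
    }
```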