-
I've seen in the Pixtral-12B Colab notebook that:
```
To format the dataset, all vision finetuning tasks should be formatted as follows:
[
{ "role": "user",
"content": [{"type": "text", "t…
-
### Description of the task
Currently, VISION_TO_ROBOT_DELAY_S in step_primitive is a tuned constant. Given some of the fluctuation in RTT seen at RoboCup, there is a case to be made to make thi…
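One way to make it dynamic would be to estimate the delay from measured round-trip times. A minimal sketch, assuming there is somewhere to record RTT samples; the class and field names here are hypothetical, not from step_primitive:
```python
class AdaptiveVisionDelay:
    """Replace the tuned VISION_TO_ROBOT_DELAY_S constant with an estimate
    driven by observed RTT (hypothetical sketch; names are illustrative)."""

    def __init__(self, fallback_delay_s=0.05, alpha=0.2):
        self.delay_s = fallback_delay_s  # start from the old tuned constant
        self.alpha = alpha               # EWMA smoothing factor in (0, 1]

    def record_rtt(self, rtt_s):
        # Approximate one-way delay as half the round trip and smooth it
        # with an exponentially weighted moving average to absorb jitter.
        self.delay_s = (1 - self.alpha) * self.delay_s + self.alpha * (rtt_s / 2)
```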
-
The base robot code needs an example of performing both navigation (moving as well as rotating the robot) and subsystem operation (e.g., moving the arm subsystem and then shooting) in response to vis…
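A minimal sketch of what such an example could look like, with hypothetical VisionSystem/Drivetrain/Arm/Shooter interfaces standing in for the actual base-code subsystems:
```python
def run_vision_routine(vision, drivetrain, arm, shooter):
    """Navigate (rotate + drive) toward a vision target, then operate a
    subsystem; all interfaces here are illustrative placeholders."""
    target = vision.get_target()           # e.g. bearing and distance to target
    if target is None:
        return                             # no target in view; do nothing
    drivetrain.rotate(target.bearing_deg)  # rotate the robot toward the target
    drivetrain.drive(target.distance_m)    # then move the robot up to it
    arm.move_to(angle_deg=45.0)            # operate the arm subsystem
    shooter.shoot()                        # and shoot once in position
```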
-
I executed the code as below.
```python
import simpler_env
from simpler_env.utils.env.observation_utils import get_image_from_maniskill2_obs_dict
import mediapy
import sapien.core as sapien
im…
```
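For reference, a typical rollout built from the imports above, patterned on the SimplerEnv README; the task name and the output path are illustrative:
```python
env = simpler_env.make("google_robot_pick_coke_can")  # one of the standard task IDs
obs, reset_info = env.reset()
frames, done, truncated = [], False, False
while not (done or truncated):
    frames.append(get_image_from_maniskill2_obs_dict(env, obs))
    action = env.action_space.sample()     # random policy, for illustration only
    obs, reward, done, truncated, info = env.step(action)
mediapy.write_video("rollout.mp4", frames, fps=10)
```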
-
The vision is essentially to have "tasks" that are defined by (1) their reward scales, (2) starting states/starting env, and (3) termination states. The environment then combines the tasks during trai…
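A minimal sketch of that task spec; the field names are hypothetical and simply mirror the three ingredients listed above:
```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, Optional

@dataclass
class Task:
    # (1) per-term reward weights for this task
    reward_scales: Dict[str, float] = field(default_factory=dict)
    # (2) produces the starting state / starting env configuration
    reset_fn: Optional[Callable[[], Any]] = None
    # (3) predicate that recognizes the task's termination states
    is_terminal: Callable[[Any], bool] = lambda state: False
```
The environment could then hold a list of Task objects and combine them during training by, for example, summing their scaled reward terms.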
-
This is a master issue to track all items related to the November 1st MultiNet Release. The motivation and scoping for this release are below. We follow with the specific issues being tracked with specific…
-
| Required Info | …
-
I modified the xacro model in ros2humble2.1.1 that you provided, adding part of the model and a Kinect camera model. I successfully used mock and rviz2 to drive the robot arm, but when I wanted to use **ga…
-
Objective: Finish the vision features we were working on last year.
Our highest priority is probably aligning the robot to the speaker. We have already managed to align the robot, but it is always a bit…
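If the residual misalignment comes from turning at a fixed speed, a proportional controller with a deadband on the camera's horizontal offset is one common fix. A sketch, with hypothetical interface names and gains:
```python
KP = 0.02            # proportional gain; needs tuning on the actual robot
DEADBAND_DEG = 1.0   # treat errors inside this band as "aligned"

def align_to_speaker(drivetrain, vision):
    """Rotate toward the speaker using the horizontal offset (degrees)
    reported by the vision system; all names here are illustrative."""
    tx = vision.get_horizontal_offset_deg()  # positive: target is to the right
    if abs(tx) < DEADBAND_DEG:
        drivetrain.arcade_drive(0.0, 0.0)    # close enough: stop turning
        return True
    drivetrain.arcade_drive(0.0, -KP * tx)   # turn speed proportional to error
    return False
```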
-
We are deprecating the vision sensor's color signatures for deciding which rings to skip, moving instead to the optical sensor to determine the color of the ring.
Similar to the previous impleme…
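A sketch of what the optical-sensor check might look like; the read_hue interface and the hue thresholds are illustrative assumptions, not the actual sensor API:
```python
RED_HUE_MAX = 30      # hues below this (degrees) are treated as red
BLUE_HUE_MIN = 180    # hues above this are treated as blue

def ring_color(optical_sensor):
    hue = optical_sensor.read_hue()   # 0-360 degree hue reading
    if hue < RED_HUE_MAX:
        return "red"
    if hue > BLUE_HUE_MIN:
        return "blue"
    return "unknown"

def should_skip_ring(optical_sensor, alliance_color):
    # Skip any ring whose color does not match our alliance color.
    return ring_color(optical_sensor) != alliance_color
```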