takuseno / d3rlpy

An offline deep reinforcement learning library
https://takuseno.github.io/d3rlpy
MIT License
1.25k stars 227 forks

[QUESTION] Offline Learning via custom MDPDataset #399

Open Charles-Lim93 opened 2 weeks ago

Charles-Lim93 commented 2 weeks ago

Greetings,

I'm looking for documentation on creating a custom/own MDPDataset, and I'm wondering how to train a model with my own MDPDataset.

I'm using my own environment for simulation and don't know how to combine it with d3rlpy's environments. Is there a way to run training with custom environments (e.g. AirSim, NVIDIA DRIVE Sim)?

Can anyone share or suggest example code for using your own MDP dataset?

Thank you in advance.

takuseno commented 2 weeks ago

@Charles-Lim93 Hi, thanks for the issue. d3rlpy supports the OpenAI Gym and Gymnasium interfaces, so you can bridge arbitrary simulators through either of them. How to build that bridge interface is out of d3rlpy's scope; please ask that question at their repositories.