Open PeterPirog opened 2 years ago
Hello, @PeterPirog!
First of all, our Python API is not complete yet. Previously, I tried to train/test RL using libtorch.
If you need it, I'll raise the priority so that you can work with it. Sorry for the inconvenience.
@utilForever
Thank you for the answer. I asked about Python because there are many Python frameworks for RL (personally I like TensorFlow with the RLlib framework, https://docs.ray.io/en/latest/rllib.html). Adding an OpenAI Gym interface with `reset` and `step` methods would make the project much easier to work with:
```python
import gym

env = gym.make('CartPole-v0')
for i_episode in range(20):
    observation = env.reset()
    for t in range(100):
        print(observation)
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        if done:
            print("Episode finished after {} timesteps".format(t + 1))
            break
env.close()
```
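To illustrate the idea, here is a minimal sketch of what a Gym-style wrapper around a card game engine could look like. Everything here is an assumption for illustration: `HearthstoneEnvSketch` and its internals are hypothetical, since RosettaStone's real Python API does not exist yet; only the `reset`/`step` interface shape matches the Gym convention shown above.

```python
import random


class HearthstoneEnvSketch:
    """Hypothetical Gym-style wrapper for a card game engine.

    The engine interactions are placeholders -- a real wrapper would
    start games and apply actions through the actual game engine.
    This only illustrates the reset/step interface discussed above.
    """

    def __init__(self, max_turns=30):
        self.max_turns = max_turns
        self.turn = 0

    def reset(self):
        # A real wrapper would start a new game in the engine here
        # and encode the board state as an observation vector.
        self.turn = 0
        return self._observe()

    def step(self, action):
        # A real wrapper would forward `action` to the engine,
        # then read back the new state and a reward signal.
        self.turn += 1
        reward = 0.0
        done = self.turn >= self.max_turns  # e.g. a player has died
        if done:
            # Placeholder terminal reward for a win/loss.
            reward = random.choice([-1.0, 1.0])
        return self._observe(), reward, done, {}

    def _observe(self):
        # Placeholder observation: just the turn counter.
        return [self.turn]
```

With this shape, the standard RL loop from the Gym example above would work unchanged against the wrapper.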
There is a project with this kind of environment (https://github.com/albertwujj/HearthEnv), but I'm afraid it's abandoned.
@PeterPirog I know that. Many RL frameworks use Python. If you need a Python API, I'll start working on this soon. 🕊️
@utilForever I like RL and Hearthstone, so I will be happy to test it. The possibility of using two agents, where both can configure decks and play against each other, could be a very interesting experiment. Maybe testing this with a classic deck is a good starting point.
Is there any tutorial on how to use RosettaStone with Python RL? I know how to use RL with OpenAI Gym, but I'm not sure how to use the RosettaStone API with Python. Is it possible to configure a deck by RL?
Peter