Closed DasShubhadeep closed 4 years ago
I was able to implement a Telegram bot with some hacking, i.e. capturing stdout from the terminal, but this is not recommended behaviour
@GraphGrailAi I see. But how did you integrate it, or send commands to the Blender model using the bot? Can you share the steps?
@stephenroller could you suggest something? I tried these changes:
```yaml
tasks:
  default:
    onboard_world: MessengerBotChatOnboardWorld
    task_world: MessengerBotChatTaskWorld
    timeout: 1800
    agents_required: 1
task_name: chatbot
world_module: parlai.chat_service.tasks.chatbot.worlds
overworld: MessengerOverworld
max_workers: 30
opt:
  debug: True
  models:
    blender:
      model: local_human
      model_file: models:blender/blender_90M/model
      override:
        model: local_human
        no_cuda: True
      no_cuda: True
additional_args:
  page_id: 580729918709559  # Configure Your Own Page
```
It loads, but there is no actual communication:

```shell
python parlai/chat_service/services/browser_chat/run.py --config-path parlai/chat_service/tasks/chatbot/config_blender.yml --port 10001
```

```text
[ optional arguments: ]
[ datapath: /home/joo/ParlAI/data ]
[ Chat Services: ]
[ config_path: parlai/chat_service/tasks/chatbot/config_blender.yml ]
[ is_debug: False ]
[ password: None ]
[ Browser Chat: ]
[ port: 10001 ]
[ Current ParlAI commit: 855b8bf1abae78d60de6a3bfe37eb4b654bf3f8d ]
[ warning: overriding opt['model'] to local_human (previously: transformer/generator )]
[ warning: overriding opt['no_cuda'] to True (previously: False )]
Enter [DONE] if you want to end the episode, [EXIT] to quit.
[I 200519 18:21:45 web:2246] 101 GET /websocket (127.0.0.1) 0.60ms
[I 200519 18:21:45 sockets:39] Opened new socket from ip: 127.0.0.1
[I 200519 18:21:45 sockets:40] Current subscribers: {'fe32184c-ed96-4994-857f-b5124a0e5568': <parlai.chat_service.services.websocket.sockets.MessageSocketHandler object at 0x7fae7dd4ccc0>}
[I 200519 18:22:00 sockets:59] websocket message from client: {"text": "hi"}
[I 200519 18:22:00 agents:40] Sending new message: {'id': 'Overworld', 'text': 'Welcome to the overworld for the ParlAI messenger chatbot demo. Please type "begin" to start.', 'quick_replies': ['begin']}
[I 200519 18:22:11 sockets:59] websocket message from client: {"text": "begin"}
[I 200519 18:22:11 agents:55] Received new message: {'text': 'begin', 'payload': None, 'sender': {'id': 'fe32184c-ed96-4994-857f-b5124a0e5568'}, 'recipient': {'id': 0}}
2020-05-19 18:22:11: Adding agent fe32184c-ed96-4994-857f-b5124a0e5568 to pool...
onboarding/overworld complete
starting pool
2020-05-19 18:22:12: Removing agent fe32184c-ed96-4994-857f-b5124a0e5568 from pool...
Starting task t_1...
[E 200519 18:22:12 logging:44] World default had error KeyError('legacy_seq2seq',)
World default had error KeyError('legacy_seq2seq',)
NoneType: None
Next task: None
[I 200519 18:22:12 agents:40] Sending new message: {'id': 'Overworld', 'text': 'Welcome to the overworld for the ParlAI messenger chatbot demo. Please type "begin" to start.', 'quick_replies': ['begin']}
```
I don't think you want to specify `local_human` along with the model file; those are very different settings
@klshuster I changed `model: transformer/generator`, and now the model seems to load, but there is another error:
```text
Starting task t_1...
[E 200519 20:15:04 logging:44] World default had error KeyError('legacy_seq2seq',)
World default had error KeyError('legacy_seq2seq',)
Traceback (most recent call last):
  File "/root/anaconda3/lib/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/var/www/ParlAI/parlai/chat_service/core/world_runner.py", line 128, in _world_fn
    return self._run_world(task, world_name, agents)
  File "/var/www/ParlAI/parlai/chat_service/core/world_runner.py", line 95, in _run_world
    world = world_generator(self.opt, agents)
  File "/var/www/ParlAI/parlai/chat_service/tasks/chatbot/worlds.py", line 50, in generate_world
    opt['shared_bot_params'][MessengerBotChatTaskWorld.MODEL_KEY]
KeyError: 'legacy_seq2seq'
Next task: None
[I 200519 20:15:04 agents:40] Sending new message: {'id': 'Overworld', 'text': 'Welcome to the overworld for the ParlAI messenger chatbot demo. Please type "begin" to start.', 'quick_replies': ['begin']}
```
Seems to be coming from here, but I don't quite understand why it's set there. Will defer to Kurt.
I fixed this by setting `MODEL_KEY = 'blender'`, and now I see an actual response from the model in the terminal, but not in the browser, because of a minor error in `parlai/chat_service/tasks/chatbot/worlds.py`, line 82, in `parley`: `response['id'] = ''`.
Full traceback:

```text
[I 200519 22:33:15 agents:55] Received new message: {'text': 'hi', 'payload': None, 'sender': {'id': 'a39b4190-b951-4676-b2f3-a71ff6cee5b5'}, 'recipient': {'id': 0}}
===act====
{'episode_done': False, 'text': 'hi', 'payload': None}
~~~~~~~~~~~
[E 200519 22:33:16 base_events:1285] Task was destroyed but it is pending!
task: <Task pending coro=<WebSocketProtocol13.write_message.<locals>.wrapper() running at /root/anaconda3/lib/python3.6/site-packages/tornado/websocket.py:1102>>
/root/anaconda3/lib/python3.6/asyncio/base_events.py:509: RuntimeWarning: coroutine 'WebSocketProtocol13.write_message.<locals>.wrapper' was never awaited
  self._ready.clear()
===response====
{'id': 'TransformerGenerator', 'episode_done': False, 'text': 'hi , how are you today ? i just got back from a long day of work , how about you ?'}
~~~~~~~~~~~
[E 200519 22:33:19 logging:44] World default had error RuntimeError('Message already contains key `id`. If this was intentional, please use the function `force_set(key, value)`.',)
World default had error RuntimeError('Message already contains key `id`. If this was intentional, please use the function `force_set(key, value)`.',)
Traceback (most recent call last):
  File "/root/anaconda3/lib/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/var/www/ParlAI/parlai/chat_service/core/world_runner.py", line 128, in _world_fn
    return self._run_world(task, world_name, agents)
  File "/var/www/ParlAI/parlai/chat_service/core/world_runner.py", line 99, in _run_world
    ret_val = world.parley()
  File "/var/www/ParlAI/parlai/chat_service/tasks/chatbot/worlds.py", line 82, in parley
    response['id'] = ''
  File "/var/www/ParlAI/parlai/core/message.py", line 26, in __setitem__
    'please use the function `force_set(key, value)`.'.format(key)
RuntimeError: Message already contains key `id`. If this was intentional, please use the function `force_set(key, value)`.
Next task: None
```
I commented out line 82 and it works, but it seems this needs more review: how it affects logging, user sessions, and other behaviour. What I also faced is that the session is not persistent: when using another browser to chat (i.e. another user), the same dialog history is used for the response.
`MODEL_KEY = 'legacy_seq2seq'` is defined there because this is a demo set of worlds; indeed, this is what you'd change to be the key in the config file under `models` (as you did with `'blender'`).

Line 82 is a bug, thanks for flagging.

The session is indeed not persistent in the current websockets implementation; we have not had a chance to get that into ParlAI yet.
@klshuster That's why I asked about a Telegram implementation in another issue. Do you have a plan for implementing sessions?

If not, I would ask for 3 simple things to get Telegram ready: 1) access to a global function that receives the user message as input; 2) access to a global function that passes the user message to the model and gets a response; 3) access to a global function that re-inputs the dialog history to the model to provide context. That would be enough for me, though inspecting the code is interesting.
What I found is that all 3 variables are in this file: https://github.com/facebookresearch/ParlAI/blob/6c2e66a9f696571fd336ea58365841fd32d15319/parlai/chat_service/tasks/chatbot/worlds.py#L69, i.e.:

```python
a = self.agent.act()
if a is not None:
    if '[DONE]' in a['text']:
        self.episodeDone = True
    else:
        print("===act====")
        print(a)
        print("~~~~~~~~~~~")
        self.model.observe(a)
        response = self.model.act()
        print("===response====")
        print(response)
        print("~~~~~~~~~~~")
        response['id'] = ''
        self.agent.observe(response)
```
So:
1) `self.agent.act()['text']` is the user message
2) `self.model.act()['text']` is the bot message (the response from the model)
3) `self.agent.observe(response)` is the re-input of history (I am not sure of the type: a dict or something else)

How do you suggest I use these in my code? Can I simply import this file https://github.com/facebookresearch/ParlAI/blob/master/parlai/chat_service/tasks/chatbot/worlds.py into my code and invoke them, for example `MessengerBotChatTaskWorld.parley(agent.act()['text'])`?
The way these chat service worlds work is that `parley` is called repeatedly until `self.episodeDone` is `True`. If you'd like, you could import that world and subclass it to modify the `parley` function.

Calling `a = self.agent.act()` will return the user's message in the form of an action/observation dict, or a `Message`, from which you can access the user message via the `text` field.

Calling `self.model.observe(a)` will have the model view the user's message, and then subsequently calling `response = self.model.act()` will return the model's response.

An important aspect of the `Message` is the `episode_done` field; if you set this to `False`, the model will continue to track the history of the conversation. Setting it to `True` will clear the model's history after the model responds. So, you can either have the model repeatedly observe new human actions with `episode_done: False`, or you can manually track the history and repeatedly observe `episode_done: True` with the `text` field being the full history.
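The `episode_done` behaviour described above can be illustrated with a self-contained toy (`ToyModel` is a stand-in class written for illustration, not a real ParlAI agent; only the `text`/`episode_done` message fields follow ParlAI's convention):

```python
class ToyModel:
    """Toy agent mimicking the described history behaviour: it accumulates
    observed text, and clears its history after replying to a message that
    was observed with episode_done=True."""

    def __init__(self):
        self.history = []
        self._clear_after_reply = False

    def observe(self, msg):
        self.history.append(msg['text'])
        self._clear_after_reply = msg.get('episode_done', False)

    def act(self):
        reply = {'text': 'context: ' + ' | '.join(self.history),
                 'episode_done': False}
        if self._clear_after_reply:
            self.history = []  # history is wiped once the model has responded
        return reply

bot = ToyModel()
bot.observe({'text': 'hi', 'episode_done': False})
print(bot.act()['text'])   # → context: hi
bot.observe({'text': 'bye', 'episode_done': True})
print(bot.act()['text'])   # → context: hi | bye  (history cleared after this reply)
bot.observe({'text': 'new chat', 'episode_done': False})
print(bot.act()['text'])   # → context: new chat
```
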
@klshuster yes, got that, good. What is the `opt` structure? I tried invoking:

```python
obj = MessengerBotChatTaskWorld(opt={'models': 'blender'}, agent='ChatbotAgent', bot='blender')
user_message = obj.parley()
```

and got: `'str' object has no attribute 'observe'`
The `opt` structure is as specified in the config here.

If you have not already, I would advise reading through the documentation for running a chat service world here. The manager for the chat service takes care of agent creation here, and task worlds are instantiated and launched via the world runner.
Of course, I inspected the config already (as you can see, I tried to pass a dict: `obj = MessengerBotChatTaskWorld(opt={'models': 'blender'})`). But I still get `'str' object has no attribute 'observe'`; my guess is that it's because a str is passed as `agent='ChatbotAgent'`, so what should `agent` be?

And yes, I have read the docs (https://parl.ai/docs/tutorial_chat_service.html), but that seems like a complex multi-step task, and even those details do not make it clear.
I see you work with these concepts as a core developer, but for me it is hard. What I am asking for are simple step-by-step import statements and instructions on how to pass parameters from my code, so I can invoke the main calls: `a = self.agent.act()`, `self.model.observe(a)`, `self.model.act()`, or `parley()`. Right now I cannot invoke `parley()`.

Should I also somehow call https://github.com/facebookresearch/ParlAI/blob/6c2e66a9f696571fd336ea58365841fd32d15319/parlai/chat_service/tasks/chatbot/worlds.py#L43, and if yes, what is `agents`?

What I expect is something like:
1) "Simply import this worlds.py into your code"
2) "Then call these 2 functions with parameters: ... That will load the model and prepare the world."
3) "Now you call `parley()` this way: `someobject.parley()`, and inside you can access `.model.observe(a)` like this: ..."

That's it.
A simple barebones approach is the following:

1. Construct an `opt` in a `config.yml` format (similar to the demo ones), and fill in the options as is done in this main function
2. Create a human agent like so
3. Create a model like so
4. Import a `MessengerChatBotTaskWorld` into your code and subclass it
5. Call `generate_world` with your setup `opt` and `agents`, where `agents = [human_agent]` created in step 2. `generate_world` will create a model from the shared model parameters generated in step 3.
6. Override the `parley` function to do as you please; call `world.parley()` to produce responses.

Let me know if these instructions are helpful
@stephenroller @klshuster
What I am trying to achieve is manual session storage while users interact with the model.

Right now I can use Pexpect (a terminal emulator) and run multiple processes with:

```shell
/root/anaconda3/bin/python3.6 safe_interactive.py -t blended_skill_talk -mf zoo:blender/blender_90M/model
```

Every terminal has a separate user session, which is what we actually need. But it is memory-consuming: every Python process takes 1300-2000 MB of RAM on the server, so several sessions will crash the server. How do I avoid this and share the same model across all users?
Read the worlds tutorial to understand how agent sharing works. https://parl.ai/docs/tutorial_worlds.html
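The core idea of agent sharing in that tutorial is that heavy model weights are loaded once and per-session clones reference them. A toy sketch of the pattern (the `SharedModel` class is my own illustration, not ParlAI code; in ParlAI the real mechanism is the agent's `share()` output):

```python
class SharedModel:
    """Toy illustration of ParlAI-style agent sharing: expensive weights are
    loaded once, every per-session clone references the same object, and
    conversational state stays per-session."""

    def __init__(self, weights=None, shared=None):
        if shared is not None:
            self.weights = shared['weights']   # reference, not a copy
        else:
            self.weights = weights             # the one expensive load
        self.history = []                      # per-session state

    def share(self):
        return {'weights': self.weights}

base = SharedModel(weights=[0.0] * 5)          # pretend this is ~1.5 GB
session_a = SharedModel(shared=base.share())   # cheap per-user clone
session_b = SharedModel(shared=base.share())

session_a.history.append('hi')
print(session_a.weights is session_b.weights)  # → True (one copy in memory)
print(session_b.history)                       # → [] (histories stay separate)
```
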
Done, but this is theoretical material that needs to be put into practice in code...
Solved sessions myself.

It was tricky coding: I just set `a['episode_done'] = True` for every message and do manual history tracking, collecting it in the `text` field.

The main thing I had not figured out at the beginning was this:

> An important aspect of the `Message` is the `episode_done` field; if you set this to `False`, the model will continue to track history of the conversation. Setting it to `True` will clear the model's history after the model responds. So, you can either have the model repeatedly observe new human actions with `episode_done: False`, or you can manually track the history and repeatedly observe `episode_done: True` with the `text` field being the full history
@GraphGrailAi I'm facing the same problem; how did you manage to solve the sessions issue? I'm running the 90M Blender model on a websocket and interacting with it through localhost. I've already created the config.yml file as you described and edited the worlds.py file to make it work with `'blender'`, but I'm still getting the same context from different sessions. So how did you use `self.episodeDone = True`, `self.model.observe(a)`, and `response = self.model.act()` to make the bot clear its history after every iteration of `parley()` and retrieve the history before responding to the human agent?
@GraphGrailAi Got the same confusion as above. Would be great if you could elaborate.
@klshuster

> An important aspect of the `Message` is the `episode_done` field; if you set this to `False`, the model will continue to track history of the conversation. Setting it to `True` will clear the model's history after the model responds. So, you can either have the model repeatedly observe new human actions with `episode_done: False`, or you can manually track the history and repeatedly observe `episode_done: True` with the `text` field being the full history

What format would the `text` field be? The previous human inputs concatenated with `\n`? Or something else?
Human and bot inputs concatenated with `\n`
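So the manual-history variant amounts to resending the whole dialog each turn. A minimal sketch (the `history` list and `build_message` helper are names made up for illustration; only the `text`/`episode_done` fields are ParlAI's convention):

```python
history = []  # full dialog so far, human and bot turns alternating

def build_message(user_text):
    """Send the entire history as one observation; episode_done=True makes
    the model clear its own state after replying, so we own the history."""
    history.append(user_text)
    return {'text': '\n'.join(history), 'episode_done': True}

msg1 = build_message('hi')
# msg1['text'] == 'hi'
history.append('hi , how are you today ?')   # record the bot's reply too
msg2 = build_message('fine thanks')
# msg2['text'] == 'hi\nhi , how are you today ?\nfine thanks'
```
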
> A simple barebones approach is the following:
>
> 1. Construct an `opt` in a `config.yml` format (similar to the demo ones), and fill in the options as is done in this main function
> 2. Create a human agent like so
> 3. Create a model like so
> 4. Import a `MessengerChatBotTaskWorld` into your code and subclass it
> 5. Call `generate_world` with your setup `opt` and `agents`, where `agents = [human_agent]` created in step 2. `generate_world` will create a model from the shared model parameters generated in step 3.
> 6. Override the `parley` function to do as you please; call `world.parley()` to produce responses.
>
> Let me know if these instructions are helpful
Can you elaborate on this? I am also trying to implement this in Telegram. I tried following these steps, but it's hard to understand. How should I create the human agent? Like this?

> Create a human agent like so

What is `task_id` and what are the other parameters? Thank you for your time.
I can explain what worked for me, but I think there is an easier way:

- Take the `interactive.py` script, which creates the human agent with `human_agent = LocalHumanAgent(opt)`
- Define a subclass:

```python
class EvalAgent(LocalHumanAgent):
    def observe(self, msg):
        self.supplied_context.append(msg['text'])
        print(
            display_messages(
                [msg],
                ignore_fields=self.opt.get('display_ignore_fields', ''),
                prettify=self.opt.get('display_prettify', False),
            )
        )

    def act(self):
        reply = Message()
        reply['id'] = self.getID()
        reply_text = "Hello world"
        print(f"[EvalAgent] {reply_text}")
        reply_text = reply_text.replace('\\n', '\n')
        reply['text'] = reply_text
        reply['episode_done'] = False
        # reply.force_set('episode_done', True)  # enabling this ends an episode if you want to reset chat history
        return reply
```

- Then on line 88, replace `human_agent = LocalHumanAgent(opt)` with `human_agent = EvalAgent(opt)`
- Run `Interactive.main(model_file='zoo:blender/blender_90M/model', t="convai2")`; `t="convai2"` will enable personas. Simply run `Interactive.main(model_file='zoo:blender/blender_90M/model')` for no persona
- To stop `parley()`, just do `human_agent.finished = True`
- `agent.self_observe({'text': utt})` will allow you to manually set the history of the Blender bot

Hope this helps. Let me know if you have any questions that I might be able to answer too!
@viprocket1 For implementing in Telegram, I'd recommend checking out the chat service tutorial and the chat service readme.

Regarding a human agent: `task_id` is just a unique identifier for the "task" where your human is. The name is a bit of an artifact; think of it more as e.g. "world_id" or "session_id" (a "task" is basically a thread running an interaction between a model and a human).
Thank you @klshuster and @josharnoldjosh for responding. I will try both approaches and get back to you.
This issue has not had activity in 30 days. Please feel free to reopen if you have more issues. You may apply the "never-stale" tag to prevent this from happening.
Sorry, has anyone been able to figure out how to allow multiple agents with separate history/context? I would really appreciate it if I could see the code. Thanks in advance.
The canonical example that works in ParlAI is the Facebook Messenger chat service implementation. I would recommend reading through that code to understand how it works.
Hi, I would like to integrate Blender with my application and send text to it remotely. I would like to start a WebSocket-based server with the Blender 90M model, but I am not able to understand what changes are required in this config file to load the 90M model with the server: https://github.com/facebookresearch/ParlAI/tree/master/parlai/chat_service/tasks/chatbot

Any help?