This would make use of pkgutil, and while implementing this we would redesign the structure of GreyMatter to the following (a rough pkgutil sketch follows the listing):
__init__.py
profile_populator.py
stt.py
tts.py
modules/
    __init__.py
    business_news_reader.py
    .... so on
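A rough sketch of the pkgutil-based loading mentioned above; the package path and the WORDS check are illustrative assumptions, not the final code:

import importlib
import pkgutil


def load_action_modules():
    # Import every module in the GreyMatter.modules package and keep
    # those that declare a WORDS constant.
    import GreyMatter.modules as package
    actions = {}
    for _, name, is_pkg in pkgutil.iter_modules(package.__path__):
        if is_pkg:
            continue
        module = importlib.import_module('GreyMatter.modules.' + name)
        if hasattr(module, 'WORDS'):
            actions[name] = module
    return actions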
Do you wish to take this @neilnelson?
Tanay, Yes, I realize that how we structure the directories is quite important. I have been thinking about it and do not have a clear recommendation at the moment though I expect to have a much better idea after some study.
I will go ahead and do the Jasper-Melissa merge. Your recent "PocketSphinx is broken" (#39) is something I will look at first, and your creation of stt.py makes good sense. You mentioned you had a design idea for brain.py, and you might present that. I hope we will have many new ideas; the nature of progress is that only a few survive, but we can only find those few by going through the many.
Ping on this issue @neilnelson!
@tanay1337, I am spending most of my time in this area and have begun sketching the code. Hopefully it will not be too far away from what you may appreciate.
As a bit of a teaser, I realized that Jasper does not directly use the WORDS lists to identify which action applies to a user expression. Jasper's Weather.py illustrates this, where we have
WORDS = ["WEATHER", "TODAY", "TOMORROW"]
and then the user's expression is checked in isValid(text) to decide whether this action should run:

def isValid(text):
    return bool(re.search(r'\b(weathers?|temperature|forecast|outside|hot|' +
                          r'cold|jacket|coat|rain)\b', text, re.IGNORECASE))
This seems a glaring disconnect: the words given in the WORDS list and those in the re.search pattern need to be joined into a single list, and then a method is needed to match the expression text against the various lists to find the action. This text-to-action process is centered in the brain.py modules of both Melissa and Jasper.
The new design will be a multi-level WORDS format that combines Jasper's single-word format with Melissa's multi-word format and prioritizes expression word matches (action selection) toward multi-word matches, favoring those completed in order. isValid(text) will then no longer be needed (noting that we would like to use Jasper's action modules with a minimum of change).
After that code is done, I am thinking we could grab the synonyms for all the words from ConceptNet and add an extra translation step: the words in the expression text would first be matched as just noted, and any unmatched words would then be matched against the synonyms, with each synonym pointing back to a word from the WORDS lists.
Although this may seem compute intensive, the entire structure can be built once, since the set of available action modules is fixed, and it can either be built by each user or downloaded. When processing the expression text, the processing time is linear in the number of words in the text rather than in the number of words throughout the WORDS lists, as is the case now (keeping in mind that the current methods exit when a match is hit, but the average hit is still roughly linear in the volume of words). We then have a way to expand the number of actions (Jasper modules) to a considerable degree with little effect on match time.
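As a toy illustration of that linearity (the names and data are invented, just to show the shape of the lookup): the word-to-action index is built once, and matching then costs one dictionary lookup per word in the expression.

# Built once from the combined WORDS lists (illustrative data).
WORD_LISTS = {'weather': ['weather', 'forecast', 'rain', 'temperature'],
              'news': ['business', 'news', 'headlines']}

word_index = {}
for action, action_words in WORD_LISTS.items():
    for w in action_words:
        word_index.setdefault(w, set()).add(action)


def candidate_actions(expression):
    # One index lookup per expression word, regardless of how many
    # actions or keywords exist.
    found = set()
    for w in expression.lower().split():
        found |= word_index.get(w, set())
    return found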
This new method is better suited to sentence-style expressions than to command-and-control phrasing.
I can see that your interest in Rapiro could utilize a good number of new actions which should fit well here.
Perfect @neilnelson, that is exactly what I had in mind! :D
Let's update the directory structure to the following:
main.py
melissa/
    __init__.py
    brain.py
    profile_populator.py
    stt.py
    tts.py
    modules/
        __init__.py
        business_news_reader.py
        .... so on
I just saw your new directory structure and it looks good. It may be that additional services like stt and tts would be added to the melissa directory and those could be grouped into a services directory. But that is farther along.
The following code obtains synonyms and associations for the words that connect to the Jasper Weather.py module.
import requests
import json


def _decode_list(data):
    rv = []
    for item in data:
        if isinstance(item, unicode):
            item = item.encode('utf-8')
        elif isinstance(item, list):
            item = _decode_list(item)
        elif isinstance(item, dict):
            item = _decode_dict(item)
        rv.append(item)
    return rv


def _decode_dict(data):
    rv = {}
    for key, value in data.iteritems():
        if isinstance(key, unicode):
            key = key.encode('utf-8')
        if isinstance(value, unicode):
            value = value.encode('utf-8')
        elif isinstance(value, list):
            value = _decode_list(value)
        elif isinstance(value, dict):
            value = _decode_dict(value)
        rv[key] = value
    return rv


synonym_url = "http://conceptnet5.media.mit.edu/data/5.4/search?rel=/r/Synonym&end=/c/en/"
association_url = "http://conceptnet5.media.mit.edu/data/5.4/assoc/list/en/"

synonyms = set()
associations = set()
words = ['weather', 'temperature', 'forecast', 'outside', 'hot',
         'cold', 'jacket', 'coat', 'rain']

print '\n\nGet ConceptNet5 Synonyms'
for word in words:
    print '\nword: ' + word
    url = synonym_url + word
    r = requests.get(url)
    syms = json.loads(r.text, object_hook=_decode_dict)
    for synonym in syms['edges']:
        if synonym['start'].startswith('/c/en/'):
            synonyms.add(synonym['surfaceStart'])
            print 'surfaceStart: ' + synonym['surfaceStart'] \
                + ' weight: ' + str(synonym['weight'])

print "\n\nGet ConceptNet5 Assocations between - 'weather' and word"
for word_pos in range(1, len(words)):
    print '\nword: ' + words[word_pos]
    url = association_url + words[0] + ',' + words[word_pos] \
        + '?filter=/c/en'
    r = requests.get(url)
    assoc = json.loads(r.text, object_hook=_decode_dict)
    for assocation in assoc['similar']:
        elements = assocation[0].split('/')
        if elements[-1] != 'neg':
            associations.add(elements[-1])
            print elements[-1] + ' rank: ' + str(assocation[1])

print '\n\n'
print 'synonyms:'
print synonyms
print '\n'
print 'associations:'
print associations
print '\n'
print 'intersection:'
print synonyms.intersection(associations)
print '\n\n'
Run it as follows (Linux):
python synonyms.py > synonyms.log
synonyms.log is
Get ConceptNet5 Synonyms word: weather surfaceStart: weather event weight: 1.58496250072 surfaceStart: dirty weather weight: 1.0 word: temperature surfaceStart: fever weight: 1.0 surfaceStart: hot weight: 1.0 word: forecast surfaceStart: forward weight: 1.0 surfaceStart: prudence weight: 1.0 surfaceStart: prediction weight: 1.0 surfaceStart: project weight: 1.0 word: outside surfaceStart: abroad weight: 1.0 surfaceStart: out of weight: 1.0 word: hot surfaceStart: phat weight: 1.0 surfaceStart: good-looking weight: 1.0 surfaceStart: sexy weight: 1.0 surfaceStart: febrile weight: 1.0 surfaceStart: spicy weight: 1.0 surfaceStart: wicked weight: 1.0 surfaceStart: horny weight: 1.0 surfaceStart: live weight: 1.0 surfaceStart: salacious weight: 1.0 surfaceStart: beautiful weight: 1.0 surfaceStart: peng weight: 1.0 word: cold surfaceStart: cool weight: 1.0 surfaceStart: parky weight: 1.0 word: jacket surfaceStart: album weight: 1.0 word: coat surfaceStart: sheet weight: 1.58496250072 surfaceStart: dress weight: 1.0 surfaceStart: jacket weight: 1.0 surfaceStart: smear weight: 1.0 word: rain surfaceStart: shower weight: 2.32192809489 surfaceStart: shower weight: 2.32192809489 surfaceStart: downpour weight: 1.0 surfaceStart: wet weight: 1.0 surfaceStart: rainfall weight: 1.0 Get ConceptNet5 Assocations between - 'weather' and word word: temperature temperature rank: 0.801168331722 weather rank: 0.781969596663 laodicean rank: 0.775973187994 in_winter_it rank: 0.77273625475 winterproof rank: 0.75838132773 winter_storm rank: 0.757638421729 unwarm rank: 0.75655503079 cricondentherm rank: 0.753923619653 pyrectic rank: 0.750519651605 cold_snap rank: 0.746733248307 attery rank: 0.744385910488 body_temperature rank: 0.743514567226 heatless rank: 0.742556832563 absolute_zero rank: 0.734034480014 lukewarm rank: 0.730497499086 unheated rank: 0.729941302903 carbunculation rank: 0.727250774096 slow_oven rank: 0.725722549968 tepid rank: 0.724931785421 overweather rank: 0.724275658432 word: forecast weather rank: 0.993171671126 fair_weather rank: 0.947328481964 overweather rank: 0.94562002231 blizzicane rank: 0.939677939931 dirty_weather rank: 0.936058996426 droughtiness rank: 0.920119538022 snowstorm rank: 0.918685057349 nimbiferous rank: 0.902871858384 atmospheric_phenomenon rank: 0.898535722701 hailstorm rank: 0.898339526523 stormworthy rank: 0.885567017634 weatherwise rank: 0.880921925649 hydrometeor rank: 0.877554294907 wet_grind rank: 0.871590133964 storm rank: 0.864112042382 rainy rank: 0.860757852961 precipitation rank: 0.85842724801 thunderstorm rank: 0.85695396564 raindrift rank: 0.852733395822 heavy_rain rank: 0.845416208319 word: outside weather rank: 0.956335613102 overweather rank: 0.918004061996 fair_weather rank: 0.916261901968 dirty_weather rank: 0.892774179647 blizzicane rank: 0.888986539704 atmospheric_phenomenon rank: 0.883145733997 droughtiness rank: 0.878812909053 snowstorm rank: 0.875147755415 nimbiferous rank: 0.873773964858 hailstorm rank: 0.849992110033 weatherwise rank: 0.845587895521 storm rank: 0.837045374649 stormworthy rank: 0.836162675837 hydrometeor rank: 0.828791473219 wet_grind rank: 0.822895152244 aweather rank: 0.813938757758 thunderstorm rank: 0.809824791046 raindrift rank: 0.809785099966 usually_white rank: 0.809592133727 rainy rank: 0.808698311646 word: hot heatwave rank: 0.881334233123 hot rank: 0.838901765756 sultry rank: 0.82567847834 hottish rank: 0.823270563092 heat_wave rank: 0.819117498275 ultrahot rank: 0.818667923063 boil_hot rank: 0.817333161981 torrid rank: 
0.811484394127 calefactory rank: 0.803899749542 sunkissed rank: 0.802168921479 sweltry rank: 0.800522113452 hotness rank: 0.799259325239 hotcha rank: 0.798881520361 hotly rank: 0.793894817162 heatedly rank: 0.790148429343 thermal rank: 0.788565148567 feverish rank: 0.788014435763 some_woman rank: 0.786318602775 overheat rank: 0.775610697102 word: cold cold_snap rank: 0.968009891818 in_winter_it rank: 0.932599278035 cold rank: 0.901681525975 coldness rank: 0.878656307619 acold rank: 0.875985324443 mountaintops rank: 0.874937767173 metal_thing rank: 0.874020667299 coldly rank: 0.872404182622 cold_bloodedly rank: 0.870504250168 frigid rank: 0.86828298225 frore rank: 0.867522091715 freeze_cold rank: 0.86515808909 parky rank: 0.862749123317 cold_as_ice rank: 0.860028556072 ultracold rank: 0.859426036892 head_cold rank: 0.859376183907 iciness rank: 0.85848138065 attery rank: 0.854843662752 freezingly rank: 0.854336138221 word: jacket weather rank: 0.90426510029 overweather rank: 0.869187247678 fair_weather rank: 0.85528536756 blizzicane rank: 0.8534089508 droughtiness rank: 0.840577806251 snowstorm rank: 0.838592851347 dirty_weather rank: 0.833391373678 weatherworn rank: 0.812912178581 hailstorm rank: 0.811178279445 atmospheric_phenomenon rank: 0.807664908752 nimbiferous rank: 0.802525002034 stormworthy rank: 0.797823748424 hydrometeor rank: 0.796297407053 weatherwise rank: 0.794318457 wet_grind rank: 0.792631182959 raindrift rank: 0.792316636134 rainy rank: 0.792288190636 rainpants rank: 0.78840638761 precipitation rank: 0.781353987353 storm rank: 0.780868614257 word: coat raincoat rank: 0.832462838779 weather rank: 0.733852355024 winter_coat rank: 0.717895289275 overweather rank: 0.71260840912 coat rank: 0.697548885658 snowclad rank: 0.693600810951 blizzicane rank: 0.689180849329 droughtiness rank: 0.687393853973 fair_weather rank: 0.679179003669 snowstorm rank: 0.67369274717 weatherworn rank: 0.667892754381 raindrift rank: 0.667673150911 duffel_coat rank: 0.666639162725 rainwear rank: 0.664520133377 dirty_weather rank: 0.663043019423 rainy rank: 0.655308952625 atmospheric_phenomenon rank: 0.650139904758 rainpants rank: 0.646695848535 hydrometeor rank: 0.646633462887 hailstorm rank: 0.646585536406 word: rain rain rank: 0.971896037457 wet_grind rank: 0.967655530233 droughtiness rank: 0.965124160067 rainer rank: 0.953105183142 rainy rank: 0.951929932927 heavy_rain rank: 0.950390002908 rainscape rank: 0.948685026316 mizzle rank: 0.94839012789 weather rank: 0.945486205196 hydrometeor rank: 0.938425196603 rainpants rank: 0.938161418887 precipitation rank: 0.933282480483 drizzle rank: 0.933215645437 rain_cloud rank: 0.925019510706 raincloud rank: 0.92389676429 precipitation_process rank: 0.92291259851 fall_water rank: 0.909407386907 raintight rank: 0.90929245131 rainfall rank: 0.908558201848 blizzicane rank: 0.906329567324 synonyms: set(['beautiful', 'rainfall', 'prediction', 'spicy', 'sheet', 'dress', 'horny', 'album', 'jacket', 'parky', 'febrile', 'hot', 'peng', 'live', 'wet', 'forward', 'out of', 'wicked', 'shower', 'fever', 'salacious', 'downpour', 'dirty weather', 'prudence', 'good-looking', 'sexy', 'abroad', 'cool', 'weather event', 'phat', 'project', 'smear']) associations: set(['raincloud', 'cold_bloodedly', 'ultracold', 'thermal', 'pyrectic', 'cold', 'fall_water', 'weatherwise', 'stormworthy', 'ultrahot', 'duffel_coat', 'atmospheric_phenomenon', 'calefactory', 'snowclad', 'rainpants', 'frigid', 'precipitation', 'laodicean', 'boil_hot', 'feverish', 'rainscape', 'dirty_weather', 'hotness', 
'snowstorm', 'acold', 'tepid', 'freezingly', 'mizzle', 'droughtiness', 'drizzle', 'head_cold', 'lukewarm', 'hydrometeor', 'nimbiferous', 'iciness', 'some_woman', 'aweather', 'blizzicane', 'raincoat', 'rainwear', 'attery', 'mountaintops', 'overweather', 'precipitation_process', 'cricondentherm', 'hotly', 'weather', 'sweltry', 'frore', 'sunkissed', 'overheat', 'temperature', 'coldly', 'freeze_cold', 'metal_thing', 'storm', 'heatless', 'unheated', 'raindrift', 'hailstorm', 'slow_oven', 'cold_as_ice', 'absolute_zero', 'heatedly', 'usually_white', 'sultry', 'body_temperature', 'winterproof', 'rainer', 'rainfall', 'hotcha', 'winter_coat', 'rain_cloud', 'raintight', 'hottish', 'rainy', 'parky', 'weatherworn', 'hot', 'fair_weather', 'carbunculation', 'heatwave', 'thunderstorm', 'rain', 'unwarm', 'heat_wave', 'coat', 'wet_grind', 'heavy_rain', 'torrid', 'coldness', 'cold_snap', 'winter_storm', 'in_winter_it']) intersection: set(['rainfall', 'hot', 'parky'])
Of course we can think about how to make improvements, but I suggest we are very quickly reaching a point where automatic extraction of synonyms for matching user expressions leaves much to be desired, and where we are looking at additional methods not available to the vanilla user, along with manual methods.
We can use the above to manually identify additional words to be added to the action module's word/phrase list, which could easily have a good number of entries.
I am constructing an SQLite3 DB from the action modules' words, plus a procedure that connects the user expression to an action. Noting that we could apply other techniques such as selecting for common words, using WordNet and other sources, or using stemming to get related word forms, none of which are available to the vanilla user, we can manually construct an SQLite3 DB and deliver it in, say, a data directory. This would (later) remove the need for a WORDS attribute in the action modules and point toward, for example, a screen for us and the user to maintain the SQLite3 data.
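A minimal sketch of how such a DB might be laid out and filled from the modules' words (the table and column names are my guesses here, not an actual schema):

import sqlite3

con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.execute('CREATE TABLE words '
            '(word TEXT, module TEXT, function TEXT, priority INTEGER)')

# Words gathered from the action modules (illustrative values).
module_words = {('weather', 'handle'): ['weather', 'forecast', 'rain'],
                ('business_news_reader', 'news_reader'): ['business', 'news']}

for (module, function), word_list in module_words.items():
    for word in word_list:
        cur.execute('INSERT INTO words VALUES (?, ?, ?, ?)',
                    (word.lower(), module, function, 0))
con.commit()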
Let's discuss ConceptNet in its respective issue. This one is for the completion of the dynamic loading of modules, which is a priority. Other than that, everything looks cool to me @neilnelson :) My only concern with your implementation, and with the eradication of WORDS, is that any developer contributing a module will have to make a change to the DB as well, along with adding the file to the modules directory. That would defeat our aim of making things easier for developers, which was basically the aim of this issue itself (not having to make changes anywhere except requirements.txt).
Completed the dynamic load changes to the Melissa modules. Completed the program to create the SQLite DB using the new WORDS attribute in the modules.
Starting the brain.py changes to use the SQLite DB and dynamic module load. Will also need to write a new profile.py singleton from which the modules will get profile info.
@neilnelson Did you read my comment on SQLite DB usage?
I can see that my earlier remarks contributed to some confusion.
I am not eradicating WORDS at this time. WORDS is used in Jasper and is part of the dynamic loading (only used to validate a module in Melissa). Here is an example of the new WORDS in business_news_reader.py.
WORDS = {'news_reader': {'groups': [['business', 'news']]}}
The format is different from Jasper's because Melissa uses groups of words, and a module may have several functions that brain.py can call. The code will detect and process either the Jasper or the Melissa format. Developers will create modules in the Melissa format with the addition of the WORDS constant.
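For illustration, detecting and normalizing the two formats could be as simple as a type check on WORDS (this is my sketch, not the actual loader):

def normalize_words(module):
    # Return {function_name: {'groups': [...], 'priority': n}} for
    # either format. Jasper modules expose a flat word list and a
    # single handle() function, so they map to one entry.
    if isinstance(module.WORDS, dict):           # Melissa format
        return module.WORDS
    priority = getattr(module, 'PRIORITY', 0)    # Jasper format
    return {'handle': {'priority': priority,
                       'groups': [w.lower() for w in module.WORDS]}}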
The SQLite DB setup is run when Melissa starts if the DB does not exist, or it may be run separately to create the DB from the WORDS constant in the modules. The basic reason for the SQLite DB is that words in the user expression need to be matched against the words associated with a module's function. Melissa uses an if/else chain, Jasper uses a word-usage check function run for each module in priority order, and the SQLite DB uses SELECTs to connect words to functions. These all perform the same general job. The advantage of the SQLite DB, together with the related code, is that processing is roughly linear in the length of the user expression instead of linear in the number of modules, as in Jasper and the current Melissa code, and it allows a considerable and easy expansion in the number of words that can be used. On the other hand, the new code will not seem as transparent. It's a trade-off for scale and flexibility.
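A rough sketch of the SELECT-per-word matching just described, reusing the hypothetical schema from the earlier DB sketch (the real code also scores word-groups, which this omits):

def match_function(cur, expression):
    # Tally keyword hits per (module, function); one query per word.
    scores = {}
    for word in expression.lower().split():
        cur.execute('SELECT module, function, priority FROM words '
                    'WHERE word = ?', (word,))
        for module, function, priority in cur.fetchall():
            key = (module, function, priority)
            scores[key] = scores.get(key, 0) + 1
    if not scores:
        return None
    # Highest hit count wins; a lower priority value breaks ties.
    return max(scores, key=lambda k: (scores[k], -k[2]))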
My prior remarks about being able to use a downloadable SQLite DB, and the degree to which it might eventually replace the WORDS constant, were about future work. I mention those additional, more future-looking aspects because they provide options and opportunities.
I am always interested in your remarks and suggested refinements. At your request I can send you the current code, which would normally wait until everything is done.
Ah, I now get what you were trying to say! Sounds good so far. I really like your structure of WORDS. Will be waiting for your PR :)
This module code illustrates the Melissa WORDS format.
WORDS = {'play_shuffle': {'priority': 1,
                          'groups': [['party', 'time'], ['party', 'mix']]},
         'play_random': {'groups': [['play', 'music'], 'music']},
         'play_specific_music': {'groups': ['play']}}
This module code illustrates the Jasper WORDS format.
WORDS = ["WEATHER", "TODAY", "TOMORROW"] PRIORITY = 4
The following description of how a module's WORDS constant is processed can seem technical, but the basic ideas are:
The first dictionary key in the Melissa format is the name of the function in the module that will be associated with its word-groups, single words, and priority, and that brain.py will choose. The function-name key has as its value a dictionary with two keys, 'priority' and 'groups'. 'groups' is required and 'priority' is optional. When 'priority' is omitted it is set to 0, which gives the function the best priority for being selected when match scoring between two or more module functions yields the same value. Higher priority values reduce a function's priority.
'groups' has a list value that may contain single words and/or lists of words (word-groups). 'play_shuffle' has two word-groups in its 'groups' list. 'play_random' has a single word and a word-group. 'play_specific_music' has a single word.
Word-groups obtain the highest score when matching words to functions in two ways. The best match occurs when the user expression contains all the words of a word-group in the same order as in the word-group. The expression may contain words between the words matched against the word-group without changing the score for the word-group.
A second, lower match level, occurs when all the words in a word-group are matched out of order.
A word-group for which one or more words are not matched is not included in scoring.
Longer matched word-groups obtain a higher score than shorter matched word-groups.
When two word-groups obtain the same highest score, the priorities ('priority') for those functions associated with the high-scoring word-groups are checked and the lowest priority (lower is better) function is selected. If two or more functions having the same highest score have the same priority then the first one returned on the SQLite query to check priority is selected. This is an undesirable scoring collision and effort should be made to avoid this occurrence by improving the word-groups for the colliding functions.
When a user expression does not match any word-group, the single words in the 'groups' list are matched and the function that obtains the highest number of single word matches in any order is selected. If two or more functions obtain the same number of single word matches their priorities are checked and the function with the lowest priority is selected. For collisions, the first function returned by the SQLite priority query is selected.
The procedure seems complex but obtains a result very close to, if not exactly the same as, the current Melissa code. (No priority values are currently used in the reformatted Melissa modules.) The difference is that all modules are scored in parallel and the one with the highest score is selected. The WORDS constant is easy to modify, allowing a developer to quickly provide any number of word-groups and single words to help differentiate one module's function from another. A future collision-checking procedure is planned.
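To make the scoring rules concrete, here is a simplified in-memory sketch of scoring a single word-group against an expression (the weights are invented for illustration; the real implementation works through the SQLite DB):

def score_group(group, expression_words):
    # 0 if any group word is missing; longer groups score higher;
    # an in-order match beats an out-of-order match.
    positions = []
    for word in group:
        if word not in expression_words:
            return 0
        positions.append(expression_words.index(word))
    in_order = positions == sorted(positions)
    return len(group) * 10 + (5 if in_order else 0)


expr = 'time to party with the mix'.split()
score_group(['party', 'mix'], expr)    # 25: both words present, in order
score_group(['party', 'time'], expr)   # 20: both words present, out of order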
The use of a scoring method to choose a classification is standard in AI.
The words in Jasper's WORDS list are treated as single words in the Melissa procedure above, which corresponds to Jasper's own matching procedure.
Jasper uses a single module function, handle, when the words for a module are matched.
Converting Jasper modules to Melissa will require modifications for mic and profile. (See twitter_interaction.py for an example of how to use tts and the new profile methods in the current Melissa.) It will likely be useful to change Jasper's WORDS list to the Melissa format to provide better matching ability.
All functions in either the Melissa or Jasper format must have a single string parameter even though that parameter may not be used by the function. That parameter is commonly the user expression.
The words in the groups list are converted to lower case when they are loaded.
The SQLite DB is a memory DB created each time Melissa starts by reading all the modules. New modules and module changes take effect when Melissa is started or restarted. A disk SQLite3 DB can be used when debugging the module data used in brain.py by changing the following line in profile_populator.py
words_db_file = ':memory:'
to, for example
words_db_file = 'words.sqlite'
and deleting the profile.json file, which will then be recreated when Melissa starts. Or the line in profile.json can be changed directly, keeping in mind that the change is lost when profile_populator.py is run again.
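Assuming words_db.py simply hands that value to sqlite3.connect, the effect of the setting is just:

import sqlite3

words_db_file = ':memory:'   # or 'words.sqlite' to keep the DB on disk
con = sqlite3.connect(words_db_file)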
SQLite Manager, a Firefox add-on, provides an easy way to view disk-based SQLite3 DBs.
I realized shortly after providing the pull request that get_modules in brain.py can be combined with assemble_words_db in words_db.py, as there is good overlap there, and the modules dictionary can then be exposed alongside con and cur in words_db.py. This will also remove the class in brain.py, making query the only function in brain.py. This will provide some streamlining and simplification.
What we gain with the pending pull request is an easy way to add action modules without any changes to the existing Melissa code. Jasper's modules tend to require the installation of more software and manual set-up, but there are some interesting ones that might be used.
We also gain by having a better way to create and apply words and word groups to connect a user expression to an action.
On your email,
@neilnelson, sorry for the late reply! I was in London for Mozilla's All Hands. There are a couple of changes that need to be made:
The directory structure is completely different from what I suggested. We'll need to incorporate that to have a cleaner top-level directory.
Shifting travis-build-core.sh fails the tests, so edits need to be made to .travis.yml (https://travis-ci.org/Melissa-AI/Melissa-Core/builds/138591029).
It seems you have overwritten the changes that I made in these commits: c625c23, 713c9f1.
Still reviewing the rest of the code. Thanks :D
From above you suggested
main.py
melissa/
    __init__.py
    brain.py
    profile_populator.py
    stt.py
    tts.py
    modules/
        __init__.py
        business_news_reader.py
        .... so on
And I think I have most of that in place except for the top-level directory with main.py. I tried the top-level directory but it was not coming together well and figured we could give that a go in a following iteration. It may be something simple I am missing to get it to come together.
Commit 1391a4a makes a change to .travis.yml, changing utilities to setup, which may or may not be what should be done.
I can include your changes for c625c23, 713c9f1 in mine. I also have a few changes noted in my prior post here, not included in the current pull request, that would go with your changes for an additional pull request.
At the moment I am starting work on the threading, none of which will be included in the changes for the prior paragraph.
How about we fill the gaps that are present in this issue and make it stable, and then start on things such as ConceptNet and threading? That keeps a single commit per issue and makes it easier for people to comprehend the changes, @neilnelson.
I looked at the Travis error, and if I use the line in question from the prior .travis.yml
curl -O https://raw.githubusercontent.com/Melissa-AI/Melissa-Core/master/utilities/travis-build-core.sh
travis-build-core.sh is found. So travis-ci.org is not finding the new location of the file under setup, as if https://raw.githubusercontent.com/Melissa-AI/Melissa-Core/master had not yet been updated at the time of the Travis run.
@neilnelson Ah, I see! Did you make the changes to the .travis.yml already?
@neilnelson just noticed you already did that! Don't you think it would make more sense to shift profile_populator.py to utilities? Other than that the changes look fine; I just have concerns about the way the PR has been filed (and a small bug at the start of the program), as mentioned in the PR itself. The rest of the PR is pending testing, which will be done once a fresh PR is opened. Thanks a lot :D
@tanay1337, yes, the .travis.yml is changed, and the new, changed one is the one travis-ci.org is using; it is still not finding the new location of travis-build-core.sh.
A utilities directory is under the melissa directory; it is where I put json_decode.py, which converts unicode to utf-8 when profile.json is loaded by profile.py. The idea for the utilities directory is that additional bits of support code that could be used in a variety of places would be gathered there. json_decode.py could also be used for responses from ConceptNet 5.
profile_populator.py does not appear to me to be a utility in the prior sense, which is why it stays in the melissa directory; it is not an action module either. We might decide to have other directories such as data, either at the melissa level or inside melissa, where we might put memory.db and profile.json. I am not making recommendations for any of these changes at this time; the arrangement of programs and files in my current PR makes sense for the moment given their relatively small number.
You mentioned a bug and I am not finding a link for it. Let me know how I can see it.
@neilnelson Sounds fair enough! The idea of a data directory and a gitignored downloads directory is good and definitely something we can look into after we finish with this and https://github.com/Melissa-AI/Melissa-Core/issues/37. My comments on that bug can be found here: https://github.com/Melissa-AI/Melissa-Core/pull/46.
We should set up a mechanism to dynamically load the modules from GreyMatter to brain.py, which will allow contributors to add third-party modules by just putting them in the GreyMatter folder. In each of the modules, we can set a KEYWORDS constant to specify the keywords for our check_messsage() mechanism.