opennars / OpenNARS-for-Applications

General reasoning component for applications based on NARS theory.
https://cis.temple.edu/~pwang/NARS-Intro.html
MIT License

[PERCEPTION] Nalifier for automatic building of taxonomy from rich instances with numeric attributes #216

Closed patham9 closed 1 year ago

patham9 commented 1 year ago

While NAL 1-3 is meant to be a logic of categorization, in practice the generic control mechanism built for contextual relational reasoning does not allow the reasoner to effectively identify new perceptual (running-ID-based) instances as already-known ones based on a large set of perceived properties and their overlap with the known instances. Furthermore, handling of numeric attributes was tricky: low-frequency properties cannot establish a similarity, so only skilled users could work around the issue by introducing "anti-properties" for perceptual properties. That workaround, however, did not resolve the control issue of making this work at large scale, and it does not fully capture value similarity, which should depend only on the distance between the two values, not on how close the values lie to the extremes.
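The value-similarity point can be illustrated with a minimal sketch (the function name and range normalization are assumptions for illustration, not ONA code): the similarity frequency depends only on the distance between the two values, scaled by the attribute's range.

```python
def value_similarity(a: float, b: float, value_range: float) -> float:
    """Hypothetical similarity frequency for a numeric attribute.

    Depends only on the distance between the two values (normalized by
    the attribute's range), not on how close either value lies to the
    extremes of that range.
    """
    if value_range <= 0:
        raise ValueError("value_range must be positive")
    return 1.0 - min(abs(a - b) / value_range, 1.0)

# Two measurements 2cm apart score the same similarity whether both
# values are near the low end or near the high end of the range:
assert value_similarity(3.0, 5.0, 50.0) == value_similarity(40.0, 42.0, 50.0)
```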

This script will be a frontend that Narsese input can be passed through before reaching ONA. It will automatically remember up to a certain maximum number of instances, and match new instances to them if they exceed a match threshold in terms of similarity truth evaluation. It will also automatically build a taxonomy of concepts based on common properties, following the strongest inheritance direction according to truth evaluation. Hereby, more general concepts higher up in the hierarchy, according to NAL inference, shrink in intension (fewer properties to consider) and grow in extension (more instances matching them). Only a certain maximum number of concepts is kept in each generalization layer, and the number of layers is bounded as well.
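A rough sketch of the instance-matching step described above. The threshold, capacity, and a Jaccard-style overlap standing in for the NAL similarity truth evaluation are all assumptions for illustration; the actual Nalifier script may differ.

```python
from dataclasses import dataclass, field

MATCH_THRESHOLD = 0.5  # assumed similarity threshold
MAX_INSTANCES = 30     # assumed bounded instance memory

@dataclass
class Instance:
    name: str
    properties: set = field(default_factory=set)

def similarity(a: set, b: set) -> float:
    """Property-overlap score (Jaccard), a stand-in for the
    NAL similarity truth evaluation."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def identify(new_props: set, known: list) -> "Instance | None":
    """Return the best-matching known instance if it exceeds the
    match threshold, else None (a genuinely new instance)."""
    best, best_sim = None, 0.0
    for inst in known:
        s = similarity(new_props, inst.properties)
        if s > best_sim:
            best, best_sim = inst, s
    return best if best_sim >= MATCH_THRESHOLD else None

def remember(inst: Instance, known: list) -> None:
    """Keep at most MAX_INSTANCES instances, evicting the oldest."""
    known.append(inst)
    if len(known) > MAX_INSTANCES:
        known.pop(0)
```

Usage: `remember(Instance("cat1", {"furry", "small", "whiskers"}), known)` followed by `identify({"furry", "whiskers", "tail"}, known)` would re-identify the stored instance rather than mint a new running ID.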

patham9 commented 1 year ago

idea slides: https://docs.google.com/presentation/d/1x37XOr2B-cUlQS2_GQ6z7_uiYtAyTK2aN_Nx2G1hGog/edit?usp=sharing

patham9 commented 1 year ago

New version (v0.3, with YOLOv4): https://colab.research.google.com/drive/1dsYE66Gt_fHxjv_cgecgBjz70cGSWGPc?usp=sharing Without YOLO (v0.3): https://colab.research.google.com/drive/1QBSXmhuNHlxdN4MkGUYXGO5DDYGI1ue6?usp=sharing

patham9 commented 1 year ago

New version, 0.4 with concept formation based on common properties: With YOLOv4: https://colab.research.google.com/drive/1JpX5NQpIgXSI726qqEKIOs8ePHbk2TM4?usp=sharing Without YOLOv4: https://colab.research.google.com/drive/1R5z9QCcSQ9la78Qb8TP1BL1kwEt0w8Je?usp=sharing

patham9 commented 1 year ago

TODO: prototypes need to keep track of their own valueReporter for each of their properties, because when evaluating the most significant differences of a new instance, the evaluation needs to be relative to the most similar instance (e.g., the typical hair length of person A) or the best-fitting category (the typical hair length of a human). This will be in v0.5 of the worksheets.
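The per-property tracking could look roughly like the following sketch, which keeps a running mean and variance per numeric property so that a new observation's significance is judged relative to the prototype's typical value. The class name mirrors the valueReporter mentioned above, but the statistics used here are an assumption, not the worksheet implementation.

```python
class ValueReporter:
    """Running mean/variance for one numeric property of a prototype
    (Welford's online algorithm); a sketch, not the worksheet code."""

    def __init__(self) -> None:
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def observe(self, x: float) -> None:
        """Fold a new measurement into the running statistics."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def std(self) -> float:
        return (self.m2 / self.n) ** 0.5 if self.n > 1 else 0.0

    def significance(self, x: float) -> float:
        """How far x deviates from the prototype's typical value,
        in standard deviations (0.0 means perfectly typical)."""
        s = self.std()
        return abs(x - self.mean) / s if s > 0 else 0.0
```

With one ValueReporter per property per prototype, "person A has unusually long hair" becomes a comparison against person A's own typical hair length rather than against a global scale.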

patham9 commented 1 year ago

https://github.com/opennars/OpenNARS-for-Applications/commit/b2a264f0c4b2f41417e0a2c67be107002eae0f19

TODO: add support for Transbot for real-time color vision, not just static image analysis as in the Colab.

patham9 commented 1 year ago

TODO: make sure revision of prototypes works in case an instance is mentioned again (as can happen, for example, when the preprocessing also tracks instances over time).
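When the same instance is observed again, the truth values of its prototype's property statements should be combined by the standard NAL revision rule: each confidence is converted into an evidence weight, the weights add, and the result is converted back. The function below sketches that rule; how it is wired into the prototype bookkeeping is the open TODO above.

```python
def revision(f1: float, c1: float, f2: float, c2: float, k: float = 1.0):
    """Standard NAL revision of two truth values (f, c) about the
    same statement. Converts confidence to evidence weight
    w = k*c/(1-c), adds the weights, and converts back."""
    w1 = k * c1 / (1.0 - c1)
    w2 = k * c2 / (1.0 - c2)
    w = w1 + w2
    f = (w1 * f1 + w2 * f2) / w
    c = w / (w + k)
    return f, c

# Re-observing the same property raises confidence:
# revision(1.0, 0.5, 1.0, 0.5) yields frequency 1.0, confidence 2/3.
```

Revision is what distinguishes a repeated mention from a new instance: the evidence accumulates instead of being double-counted as an independent observation.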

patham9 commented 1 year ago

Nalifier is merged into master ( https://github.com/opennars/OpenNARS-for-Applications/pull/226 ), and a working example with a real-time camera feed is provided and showcased on YouTube: https://www.youtube.com/watch?v=ny_AVZLE2X0 New perception systems for ONA will be based on this new perceptual reasoning capability, and further improvements will be driven by practical examples.