v32itas / systems-thinking

api + api1 + api2 + api3 = ?

collaborative-innovation-network #2

Open cauerego opened 8 years ago

cauerego commented 8 years ago

I'm quite interested in this, but wouldn't you prefer using another channel? I would of course suggest my own forums, but I think you'd be intimidated by them. I just don't think GitHub Issues is a good place to have this conversation. How about a Slack channel?

I still don't get what that log means, but you must have figured out that I would love to see such an advanced AI working, even if in the wild! Is this it, though? :)

v32itas commented 8 years ago

It's hard for me to get used to GitHub because I'm not a programmer and have never worked on any serious project, just my own tiny selfish ones. I suggest using GitHub itself for chatting: our Gitter channel. And if you are worried about your information security, explain your infosec problem to me and I'll solve it. I already finished the paranoid level and entered the calculated one, which is based on sane balancing of risk probabilities. The uber-paranoid are often the ones who commit the worst mistakes.

v32itas commented 8 years ago

@cauerego Now that I think about it, I realize that my whole life I have been reverse engineering systems, even back in kindergarten. Those systems were usually individual humans or groups of humans; only now that I have begun crunching systems theory does it look like system-related concepts are similar to what I was trying with humans. I was disassembling their behavioral models into separate objects and discovering reasons for, and solutions to, the problems they cause. And I'm quite OK at social engineering. So nearly all my life I have been self-learning by reverse engineering the systems around me, which is why I'm looking at this from quite a different angle (not to mention with wrong definitions).

But to build a [Multi-agent system](https://en.wikipedia.org/wiki/Multi-agent_system) of any kind that would matter, I need human volunteers; ~5 quality volunteers would be sufficient for a nice start. And this definition is very like-minded to me: it defines all intelligent agents (where we can add humans as well) as at least partially independent. This is exactly what I'm standing for: a flexible, portable, decentralized and independent system which in theory can lose any agent but still remain functional. So it would be possible to use some agents in between other ones for machine learning of various kinds, based on human volunteers' ideas.

My own ideas are mainly about a CLI and VUI for improving my own performance, and for the very beginning I have in mind a CLI and VUI implementation for my own simple and boring everyday tasks. VUI-to-shell interaction could seriously improve my manual system administration capabilities. Since I have been doing everything manually for more than half a year now, I have a lot of system administration problems simplified, and I spend some additional time on classical shell scripting with Linux ash and BSD sh, to get habits that are as universal as possible. I have to say that there are many administrative tasks on Linux that can be done the same way on nearly all distros; this especially applies to wireless management. It doesn't matter how many GUI tools you have tried: once you get it right on Slackware via the CLI with only raw wpa_supplicant and wpa_cli, you will be able to solve Wi-Fi problems on nearly any Linux. This is what KISS (Keep It Simple, Stupid) means. You can solve the actual problem, or you can use a ton of complexity to get the feeling that somebody else is doing it for you. I'm talking about Ubuntu admins who actually have no idea about Linux at all. To learn Unix-like systems administration, Slackware and OpenBSD on Freenode would be the best options.

OK, enough about sysadmin stuff. Because I have reverse engineered a lot of human minds and IT systems down to a human-readable level, I now feel capable of creating some kind of evolving system based on Organizational Learning, Collaborative Innovation Network and Knowledge Management.

It would be used to connect humans who want to learn and advance, connecting them with our own learning machines and with external public APIs for our own knowledge management and performance improvement. By learning individually, each person can start automating their boring tasks on their own, or possibly we might get a neural network expert who will build neural networks for our system interconnections. Especially since the currently planned user interface is a VUI on top of a CLI, both closely related to natural language, which is a good starting point for machine learning. Endless possibilities, but we need humans like me (https://en.wikipedia.org/wiki/The_Fifth_Discipline), humans who have fewer weaknesses than the average one. Greed, envy, gluttony, pride... critical and easily exploitable human vulnerabilities, which I'm capable of detecting before they know it, and of exploiting to my advantage. That's unacceptable inside such a Collaborative Network.
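To make the "lose any agent but still remain functional" idea a bit more concrete, here is a minimal sketch in Python. It is purely illustrative: the `Platform`, `Agent` and example agent names are my own assumptions, not anything that exists in this project.

```python
# Minimal multi-agent sketch: a "platform" routes messages to loosely
# coupled agents and keeps running when any agent disappears.
# All names here (Platform, Agent, FeedFilterAgent, ...) are hypothetical.

class Agent:
    """Base class: an agent only needs a name and a handle() method."""
    def __init__(self, name):
        self.name = name

    def handle(self, message):
        raise NotImplementedError


class FeedFilterAgent(Agent):
    """Keeps only messages that contain one of its keywords."""
    def __init__(self, name, keywords):
        super().__init__(name)
        self.keywords = keywords

    def handle(self, message):
        if any(k in message.lower() for k in self.keywords):
            print(f"[{self.name}] relevant: {message}")


class ArchiveAgent(Agent):
    """Stores every message it sees (the 'warehousing' part)."""
    def __init__(self, name):
        super().__init__(name)
        self.store = []

    def handle(self, message):
        self.store.append(message)


class Platform:
    """Routes messages to all registered agents; losing one agent
    never stops the others."""
    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def unregister(self, name):
        self.agents.pop(name, None)

    def broadcast(self, message):
        for agent in list(self.agents.values()):
            try:
                agent.handle(message)
            except Exception as exc:
                # A misbehaving agent is dropped; the rest keep working.
                print(f"dropping {agent.name}: {exc}")
                self.unregister(agent.name)


if __name__ == "__main__":
    platform = Platform()
    platform.register(FeedFilterAgent("osint-filter", ["security", "linux"]))
    platform.register(ArchiveAgent("archive"))

    platform.broadcast("New article about Linux wireless management")
    platform.unregister("osint-filter")               # lose an agent...
    platform.broadcast("Another security advisory")   # ...system still works
```

The point is only the structure: agents share nothing but the message bus, so in principle any of them could be a human, a bot, or a wrapper around an external API behind the same interface.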

cauerego commented 8 years ago

Sorry, too much random information for me. But let me just suggest this: www.basiux.org - see if it makes any sense to you.

v32itas commented 8 years ago

It does. But it's not exactly what I was talking about. Learning machines would be the last pieces to complete the whole project. However, sometimes my head gets overloaded. This is actually a simple multi-agent system composed of human volunteers and other kinds of intelligent agents, synchronized through some collaboration platform. Its primary function is data mining, mainly OSINT that is available on social networks, and warehousing that data for later analysis. Eventually the whole platform would be nicely connected to external social networks and search engines, and the most boring and casual tasks would be automated. So learning machines might be used on the mined data. ATM it's just about building an "OSINT factory".

OK, so I used Kolab Groupware on CentOS as the platform for the system. I took CentOS because I think SELinux will be worth considering in the future and I have no skills with it, so better to start learning it earlier. I have already seen tons of demos on Twitter mining, so I'm not in a hurry for that. I'm currently making progress with Facebook automation and data extraction. By default, Facebook can be quite easily absorbed into Kolab: all kinds of events can be imported into Kolab, as well as contacts. And those address books are actually very suitable for storing intelligence on individuals.
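As a rough illustration of the contact-import idea (this is not Kolab's own API; it just emits standard vCard 3.0 records, which groupware address books generally accept, from the kind of minimal profile data discussed later in this thread: first name, last name, profile ID; the field names, URL pattern and example record are assumptions):

```python
# Hypothetical helper: turn minimal profile records into vCard 3.0 text
# that a groupware address book (e.g. Kolab's) could import.
# The record fields and the profile URL pattern are illustrative only.

def profile_to_vcard(first_name, last_name, profile_id, note=""):
    """Build one vCard 3.0 entry from a minimal profile record."""
    lines = [
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"N:{last_name};{first_name};;;",
        f"FN:{first_name} {last_name}",
        f"URL:https://www.facebook.com/{profile_id}",
    ]
    if note:
        lines.append(f"NOTE:{note}")
    lines.append("END:VCARD")
    return "\r\n".join(lines) + "\r\n"


def export_vcards(records, path="contacts.vcf"):
    """Write (first, last, profile_id, note) tuples to one .vcf file."""
    with open(path, "w", encoding="utf-8") as f:
        for first, last, pid, note in records:
            f.write(profile_to_vcard(first, last, pid, note))


if __name__ == "__main__":
    # Placeholder record, not real data.
    export_vcards([("Jonas", "Pavardenis", "jonas.pavardenis.example",
                    "collected from public profile")])
```

How the resulting .vcf actually gets into a specific Kolab installation (web client import, CardDAV, etc.) is outside this sketch.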

Maybe now you get my idea. Anyway, everything is going fine, slow but steady.

cauerego commented 8 years ago

I think now I got it, indeed. :)

But the learning machine comes before data mining, imho. It's no use mining data just to store it; it needs to be processed by the neural network at the same time. Trying to make a single machine, or even a cluster, become smart will not work today. But there are things that will.

I just updated the site with 2 very important videos and some relevant data. Take a look.

v32itas commented 8 years ago

Probably you're right. But the thing is that the whole system is going to have a lot of bots for different kinds of automation, mainly for OSINT gathering and filtering using predefined parameters, some for surveillance inside our intranet and some for intrusion detection. So for people profiling I'm OK with just my own manual methods and analysis ATM; there's not much to learn from hand-picked, high-quality people profiles anyway. But I think there will be much to learn from how our system itself works.

Currently I'm not thinking about BIG DATA. I'm thinking about synthetic agents filtering RSS feeds, emails and all known sources of information, looking for serious articles, books and documents; sorting everything by category and auto-publishing in the right places on our social network profiles; synchronizing social network events and importing them into our platform; auto-adding users from our groups and pages as contacts into huge address books... There will be a lot of action inside our platform.

As I said before, I'm not a programmer and I'm a noob at data mining. I will learn that stuff eventually, but ATM I'm busy with system administration and improvement. I have a few students and they just started with basic information gathering. Currently I'm only capable of data mining or HQ data extraction from hand-selected sources. And yes, I have no idea how to store the data, because I have no idea how I'm going to use it later. I'm talking about giant user lists with only first name, last name and profile ID, and giant information feeds available via the Facebook API. This research is country-specific, targeting only Lithuanians. What would you suggest doing with the data I mentioned?

I have some kind of book about data mining for infosec. One good example of an infosec learning machine would be ESET Smart Security: it monitors every single movement on the system and is sometimes capable of detecting new malware that hasn't been put in the signature database yet. Not sure how I could use Facebook data for machine learning yet. Then it's better to use surveillance bots and to use their data on the fly instead of storing it, so it would be possible to learn from human behavior. Need to read about those social bots; I heard that Twitter is full of them.
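For the "synthetic agents filtering RSS feeds by predefined params" part, a minimal sketch could look like the following. It uses the third-party feedparser package; the feed URL, keyword lists and category names are made up for illustration.

```python
# Sketch of one "synthetic agent" that polls RSS feeds and keeps only
# entries matching predefined keywords, grouped by category.
# Requires the third-party 'feedparser' package (pip install feedparser).
# Feed URLs and keyword lists below are placeholders, not real config.

import feedparser

CATEGORIES = {
    "security": ["exploit", "vulnerability", "malware"],
    "linux":    ["kernel", "slackware", "openbsd"],
}

FEEDS = [
    "https://example.org/news.rss",   # placeholder feed URL
]


def classify(entry):
    """Return the first category whose keywords appear in the entry, or None."""
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    for category, keywords in CATEGORIES.items():
        if any(k in text for k in keywords):
            return category
    return None


def run_once():
    """One polling pass: fetch every feed and print matching entries."""
    for url in FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            category = classify(entry)
            if category:
                print(f"[{category}] {entry.get('title')} -> {entry.get('link')}")


if __name__ == "__main__":
    run_once()
```

Auto-publishing the matches to social network profiles would be a separate agent behind each network's own API; that part is deliberately left out here.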

cauerego commented 8 years ago

How about we use the Discord app for talking? That way we can have a quicker, asynchronous conversation (because nobody else will be on the IRC-like channel, probably) and jump into voice calls whenever we both happen to be online.

With all that I just mean: we should Skype, dude! ;P
