opencog / docker

Docker containers for OpenCog - Robot Operating System (ROS)

indigo/ros-opencog fails to build because CMake is out of date #167

Closed mwigzell closed 2 years ago

mwigzell commented 2 years ago

Hey guys, I'm not yet sure how to upgrade the version of CMake being used. When I run build.sh I see this error:

CMake Error at CMakeLists.txt:19 (CMAKE_MINIMUM_REQUIRED):
  CMake 3.0 or higher is required. You are running version 2.8.12.2

mwigzell commented 2 years ago

OK, so investigation of ros-base/Dockerfile shows it is FROM ubuntu:14.04, and further down the line, ros-opencog/Dockerfile runs:

RUN apt-get -y install gcc g++ cmake binutils-dev libiberty-dev \
    libboost-dev libboost-date-time-dev libboost-filesystem-dev \
    libboost-program-options-dev libboost-regex-dev \
    libboost-serialization-dev libboost-system-dev libboost-thread-dev \
    cxxtest

This is what brings in "cmake". So the issue is that the surrounding packages have moved on, but ubuntu's 14.04 repositories still pull in CMake 2.8.12.2, which is too old. How to proceed?

mwigzell commented 2 years ago

I have the impression that the easiest way forward would be to bump the Ubuntu version. Perhaps a better fix would be to use an Arch container; at least this kind of issue wouldn't happen there. But ultimately, if the APIs used deprecate certain features, then the opencog stuff would break anyway. Neither way is going to work forever, I guess. It's disappointing that the docker mechanism did not yield a more robust snapshot of the basic dependencies needed to run.

mwigzell commented 2 years ago

OK, so I tried to use Ubuntu's "xenial" release, but no joy: ROS needs "trusty". It appears I need to "manually" install a version of "cmake" that is not shipped with "trusty": cogutil's cmake configuration requires CMake >= 3.0, which is not in "trusty".
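A minimal sketch of the workaround found in the next comment (trusty does carry a cmake3 package); the symlink step is an assumption about how the build scripts locate cmake, not something from this thread:

RUN apt-get -y install cmake3 \
 && ln -sf /usr/bin/cmake3 /usr/local/bin/cmake   # /usr/local/bin precedes /usr/bin on PATH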

linas commented 2 years ago

Hi Mark.

Yes, your intuitions are correct. So:

  • The base should be updated to run xenial ... but, as you note:
  • indigo is a specific release of ROS, and I guess that maybe trusty is the last release that was supported by indigo.
  • Thus, two changes are needed: we need an indigo-base that uses trusty, and we need a current-base that is 20.04 or similar, plus a port of the indigo containers to the latest ROS version, which seems to be Noetic Ninjemys.

You are bumping into the issue that no one is using these containers any more, and so they are not being maintained. I do think they are notable, and worth maintaining, as they are not a bad place to start for building things .... just that ... no one is doing anything with them at this time.

If you submit pull requests to modernize things, I'll merge them.

mwigzell commented 2 years ago

Hi Linas,

I got a little further: there is a "cmake3" package for trusty. But when I proceed with that, it stumbles on GCC:

Step 33/42 : RUN (mkdir /opencog/cogutil/build; cd /opencog/cogutil/build; cmake ..; make -j6; make install)
 ---> Running in eb2286a9b785
-- The C compiler identification is GNU 4.8.4
-- The CXX compiler identification is GNU 4.8.4
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Build type: Release
CMake Error at cmake/OpenCogGccOptions.cmake:8 (MESSAGE):
  GCC version must be at least 7.0!

So it looks like an uphill battle. Is it worth it to continue? I was pursuing this as a way to come up to speed on things, but not being able to run the demos is disheartening. What are people working on then? Is there an alternative to Eva in a docker container? Do we no longer use ROS? I understood that there is a networking/port issue between docker and ROS, but that is probably history now. Was it resolved? Cheers, Mark
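For reference, a hedged sketch of one way to satisfy the GCC >= 7 check on trusty; the ubuntu-toolchain-r/test PPA is an assumption (it is not mentioned anywhere in this thread):

RUN apt-get -y install software-properties-common \
 && add-apt-repository -y ppa:ubuntu-toolchain-r/test \
 && apt-get update \
 && apt-get -y install gcc-7 g++-7 \
 && update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 70 \
 && update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-7 70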


linas commented 2 years ago

Is it worth it to continue?

Maybe

What are people working on then?

Ben Goertzel is working on singularity.net, an effort to build an AI tool platform for the enterprise -- kind of an open-source version of the assorted AI tools that Microsoft, IBM and Amazon offer, all within a crypto marketplace (so you can buy/sell AI services). He is also funding some basic research, but it's unclear to me what is being done there.

David Hanson continues to make Hanson Robotics a profitable enterprise, and that means taking advice from those people who insist that the only good source code is closed source -- people such as VCs and other investors who think they can profit from selling the company (and the closed source in it) to someone bigger. So he's not using this framework any longer. He might still be using ROS, but I don't know.

One of the more active opencog projects is "rocca" -- the "Rational OpenCog-Controlled Agent" -- which uses Minecraft for embodiment. Nil Geiswiller is leading that project, and several people are helping.

I'm doing basic research on symbolic learning at https://github.com/opencog/learn. I'm getting good (great) results, but since there is no whiz-bang demo to impress the crowd, no one notices or cares. I'm unable to get funding or a spot in the limelight, so mostly I'm busy getting poor while leading an obscure life. I've concluded it is better to do something useful and good for no money than something pointless and depressing for a salary. So I'm part of the Great Resignation, I guess. It's not an economically sound decision, but the choices are bleak either way.

I think resurrecting the ROS interfaces is a worthwhile task. You should do it if at all interested. However, after this, the question arises: "great, now what?" -- because you'll have a body, but no brain. You'll have basic sensory input (vision, sound) and basic output (a good-quality face animation), but ... then what? Well, wiring up some kind of simulacrum is hard. Well, not that hard -- the game-AI people have lots of NPCs, and Eva in her heyday was a good working NPC. Converting this puppet show into something that learns and reasons is ... well, like I said, I'm doing basic research on learning and reasoning.

Is there an alternative to Eva in a docker container?

The Eva code base always ran just fine outside of containers (although it has bit-rotted; I can provide some advice on resuscitating it, if you are interested).

understood that there is a networking/port issue between docker and ROS

Docker was/is quirky. There was a work-around/fix for that. Personally, I use LXC for everything.
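A hedged sketch of the commonly used fix for the ROS-in-docker port problem -- host networking, since ROS 1 nodes negotiate ephemeral ports with the master, which NAT'd container networking breaks. Whether this is the exact work-around meant above is an assumption, and the image name is a placeholder:

docker run --rm -it --network=host \
    -e ROS_MASTER_URI=http://localhost:11311 \
    some-ros-image roscore   # placeholder image; 11311 is the default master port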

mwigzell commented 2 years ago

Hi Linas, yes, I'll try to get ROS back up. I already have "ros-base" running on ubuntu:20.04; I'll see about ros-opencog soon. For me, it is all about trying to understand what you brains cooked up and are cooking up. I'm a technologist, but not a scientist. (I don't deal with mathematical theory, but I love to tinker with it if it's been put into a library.) Of course, I'm very interested in all things AI, to the extent I can understand them. Sorry to hear you are not being remunerated for your efforts, and have no recognition. I do know a little about singularityNet and Hanson Robotics; I joined SophiaDAO. I'll look into "rocca" when I get the chance. I was just wanting to run the darn Eva demo, to see the various parts involved and how they work. I like the ideas in "atomspace" so far. I have no idea whether it is theoretically a viable way of representing AGI state, as I think Ben is hoping to use it for.

Yes, thanks, I'd like any help you can give me to get Eva working, if I ever get that far. I'll try to set up some pull requests for my updates. I think basic research on learning and reasoning is what's still needed, so I'm glad you're doing it!

Me, I'm coming to the end of my salaried career soon. It's boring. A waste of time. That's why I'm looking into AI on the side. I find the old programming urge is still there if the task is worthy. I don't have a vision of how AGI might be achieved. I hope you brains do. Obviously, having a kind of thing like atomspace is a step in the right direction. I think that AGI might be a reachable goal. It probably needs to be "fast trained" to get it up to scratch once it's been created.

I suppose thinking of it the way we think of a single entity is probably wrong? Silicon-based intelligence has the fundamental advantage of working in parallel. So wouldn't it be more like a sort of Borg mind?

Well, it's nice to have this exchange with you. I'll post back when I'm ready. Cheers --Mark


mwigzell commented 2 years ago

Hi Linas, I ran into an issue building opencog/relex:

error: package org.slf4j does not exist

Was there a fix for that? I imagine it needs to fetch the jar and put it on the class path? --Mark


linas commented 2 years ago

org.slf4j

This is a logging-for-java package; I forget the full name. It might even be the one that was making news headlines a few weeks ago. All of the scripts and makefiles and whatnot should already have the classpaths set up, so I'm not sure what the root cause of the error is.

The relex README.md says:

linas commented 2 years ago

For me, it is all about trying to understand what you brains cooked up and are cooking up.

Plot reveal: it turns out the architecture is follow-your-nose, fairly straightforward stuff.

I like the ideas in "atomspace" so far. I have no idea whether it is theoretically a viable way of representing AGI state,

It is. To demystify things a bit: super-popular these days are neural nets & deep learning, and if you look at what is actually being done, you find that the neural nets generate giant vectors and matrices of floating-point numbers. So if you want to "store that knowledge", you just have to store a bunch of floats, which is not theoretically challenging. (There's a grab-bag of practical details -- disk layout, memory access, performance, etc. Standard engineering issues.)

There's also the general realization/observation that humans work "symbolically", with concepts like "cat", "dog", "chair", "table", so you have to be able to work with and store relationships like that too. A prehistoric way of storing these is a "relational DB", aka SQL. In the early medieval times, Martin Luther nailed a tract about NoSQL to the church doors of Oracle and IBM. Fast-forward a few centuries, and one has denominations of all kinds of "graph databases". Which, oddly, use a modified SQL as their query language. How weird is that?

The AtomSpace is a kind of graph database which can store large blobs of floating-point numbers as well as graphs, and as a bonus has lots of other nice features, including a query language that is more powerful than conventional SQL (which is possible because the joins are done under the covers, so vast amounts of complexity are hidden). So, yes, the AtomSpace & Atomese are the generally-correct direction for storing data that comes in a variety of formats.

I don't have a vision of how AGI might be achieved. I hope you brains do.

Yes, I do. One step at a time. For the lowest-level layers, take a look at the "learn" project. I'm excited because I think I can unify vision, sound, language and symbolic processing all in one, while still having many/most of the deep-learning concepts embedded/preserved (although from an unusual angle).

Me, I'm coming to the end of my salaried career soon.

It's hard/impossible to "build software correctly" until you have the experience of doing it wrong a bunch of times, and that only comes after a long career. So it's very valuable experience, and often underestimated. I'd kind-of prefer that to grad students who get the concepts but are terrible programmers.

AGI ... It probably needs to be "fast trained" to get it up to scratch once it's been created.

If only. First, let's ponder some historical examples. The theory for neural nets can be written down in half a page or a page, and yet a fast, usable implementation with a nice API needs zillions of lines of code running on GPUs, and scaling it to the cloud needs even more; it's a hundred-billion-dollar industry by now. Before that, say, SQL: the theory for SQL is more like 20 pages -- it's pretty complex -- but 20 pages is tiny compared to the lines of code and the money involved. Another example, compilers: yikes! Compiler theory is more like 50 or 100 pages minimum, but we do have excellent compilers.

AGI theory is, at a minimum, as complicated as all of those things put together. Even when there is a clear vision, it's a lot of work to write the code, debug it, run it, crunch the data, figure out why the data crunched wrong, rinse and repeat. It's not as if it will magically work the minute you're done coding. There are vast amounts of bring-up and experimentation.

So wouldn't it be more like a sort of Borg mind?

Oh, you mean facebook and tiktok? Where everyone has a cell phone bolted to their skull, wired directly to their hippocampus? Yes, AGI will be connected to that. We already have proto-AGI working at that scale; it's called "algorithmic propaganda", and there were some unhappy Congressional investigations and disturbing Zuckerberg memes surrounding it. The future will be turbulent.

mwigzell commented 2 years ago

lol, thanks for all that! It does give me perspective. I will check out that "learn" project when I get a moment.

mwigzell commented 2 years ago

Hi Linas, well, I have built ros-opencog and ros-blender. But I have trouble running blender:

root@011adca7c645:/catkin_ws# blender
Error! Unsupported graphics card or driver.
A graphics card and driver with support for OpenGL 3.3 or higher is required.

I already avoided an initial error from libGL by setting LIBGL_ALWAYS_INDIRECT=1. (It was trying to use "swrast", but that export quelled it.) The real issue seems to be that the docker container needs a modern graphics driver. My graphics card supports OpenGL 4.6, so I'm not worried about the card itself. Any ideas?

mwigzell commented 2 years ago

Hmm, I was thinking about this wrongly: inside the container we just need to run the X client, which then connects via the local socket to the X server. I tried this approach with good ol' "xeyes" and it works! It was fun to see that again. So now I'm trying to recall what would prevent "blender" (and "glxgears") from working. They must be built with something that side-steps old-style X.
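A minimal sketch of that xeyes experiment, assuming the host X server and an image with X clients installed (the image name is a placeholder):

xhost +local:                          # let local unix-socket clients into the X server
docker run --rm -it \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    some-x11-image xeyes               # placeholder image name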

mwigzell commented 2 years ago

I should note that without the above-mentioned LIBGL_ALWAYS_INDIRECT, running blender looks like this:

root@041bb1e0e052:/catkin_ws# blender
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
Error! Unsupported graphics card or driver.
A graphics card and driver with support for OpenGL 3.3 or higher is required.
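An aside: a hedged alternative to indirect rendering is to force Mesa's software rasterizer, which in sufficiently recent Mesa releases does advertise OpenGL 3.3+ core profiles. Slow, but useful for ruling driver problems in or out:

unset LIBGL_ALWAYS_INDIRECT      # indirect GLX caps out well below GL 3.3
export LIBGL_ALWAYS_SOFTWARE=1   # force Mesa's llvmpipe software renderer
blender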

mwigzell commented 2 years ago

So I get it: Blender doesn't support indirect (networked X11) rendering anymore. It would need wrapping, which is complicated. Sigh.

linas commented 2 years ago

In order to get good rendering performance (at least, back in the good old days) OpenGL always did direct rendering, talking straight to the graphics card, bypassing X11.

I don't really recall how we ran blender in the docker container, but I have a very vague memory that it was running in direct-rendering mode. I could be wrong, but the reason I seem to remember this is that there were some special docker flags that had to be declared to get it to work ... although those flags should be in the github scripts or readmes, so it should not be totally mysterious.

Now, I'm 99.9% certain that docker supports direct rendering, because the deep-learning neural-net industry needs to talk directly to the graphics card -- to get at the GPUs, to send in the CUDA code they need to run. It would not make sense for docker to tell the neural-net guys "sorry, no can do".

Search for "docker direct rendering". There might be some fiddly bits that have to be set just so in some arcane config file.
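A hedged sketch of those fiddly bits: --device /dev/dri passes the direct-rendering device through for the Mesa (Intel/AMD) stack, while --gpus all assumes the NVIDIA container toolkit is installed on the host; image names are placeholders:

# Mesa / Intel / AMD: pass the DRI device into the container
docker run --rm -it \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --device /dev/dri \
    some-image blender

# NVIDIA: requires nvidia-container-toolkit on the host
docker run --rm -it --gpus all some-cuda-image nvidia-smi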

mwigzell commented 2 years ago

Hi Linas, I just submitted a pull request; it gets eva-ros almost there. From my reading, Blender used to draw via X11, with some flag magic to get acceleration via MIT-SHM, while still staying vendor-neutral. But from 2.80 up, the Blender people moved beyond plain X11 (requiring a modern OpenGL 3.3+ context, and heading toward Wayland), which is totally direct. I don't run Wayland here; I'm running an XFCE desktop on X11. As you write, the GPU guys want to use CUDA.

For the "docker" project, that means we would need to run the whole thing in a VM with a Wayland-based distro having specific NVidia support, I think. (The OpenGL must be from Nvidia, 'cause it's going through their GLVND dispatcher.) That would be one configuration; then we'd have to have a different VM to target the non-Nvidia world too.

But at least Blender 2.79 is coming up, and it's running on Ubuntu 20.04 with the newest opencog integrated, using the ROS "noetic" release. I just need to get those eva containers to play ball, and find some more of that old code.

Anyway, let me know your thoughts, esp. how I could get that "eva-ros" container to run, and where the Eva-1.png files are referenced from. Cheers --Mark


linas commented 2 years ago

Blender-2.79 should be fine. We're not authoring, so the latest and greatest is not needed.

I don't recall any need for any png files, except in the documentation. What is Eva-1.png needed for?

The only "old code" that I recall was some ROS-to-opencog shims (written in python) that someone moved from their original git repo to another git repo. I think I updated the README in the original repo to indicate the new location. As I remember it, this would be the biggest "old code" issue, and it would be fixed by finding the new repo and rewiring things up to there.

linas commented 2 years ago

The missing "old code" repo was called "ros-behavior-scripting" -- the git repo still exists, I think, but has been gutted. I mention this because it (or its replacement) will be needed, and I did not see it in the docker files as I scrolled through them.

mwigzell commented 2 years ago

Hi Linas, I will address all the concerns mentioned in your comments. Thanks for looking through the code and merging it. I don't want to proceed with any of the new structural changes, however, until I have the "eva-ros" container totally working. Right now, as it stands, we are missing some libs, and Blender is coming up empty. The 3 missing libs are:

ros-noetic-openni-camera
ros-noetic-mjpeg-server
ros-noetic-dynamixel-msgs

I did find something here: https://github.com/ros-drivers/openni_camera but it feels like I'm picking through a junk yard.

1) Do you know where I can/should get them from? (Obviously they were originally "indigo" libs, but apt-get cannot locate them.)
2) I used the github "linas" repositories for pi_vision, perception, blender_api_msgs and blender_api, because the opencog ones were not found. Or did I miss something?

Cheers --Mark


linas commented 2 years ago

OK, well, this brings us to the wild frontier.

These packages

ros-noetic-openni-camera ros-noetic-mjpeg-server

along with pi_vision (which itself seems to be stale, deprecated if not outright obsolete) were used to build a face detector. In short, Eva could see one (or more, or zero) faces in front of her, and direct her gaze at what she was seeing. If there were two faces, she would alternate attention between them; if there were none, she'd pout unhappily and eventually fall asleep. There was another plug-in (lost to the winds) that could recognize certain specific people, so that she could engage them by name. This whole pipeline was rather shaky: the face detection in pi_vision never worked all that well -- it failed in bad lighting, or if you turned and showed a quarter face, or cocked your head. Standing in front of a bright window ruined everything. Basically, it worked only in soft office lighting, and nowhere else. Trying to resuscitate this old code is not a worthy task. Replacing it wholesale with something modern and much more flexible and accurate would be the right thing to do. You wanted to learn about the architecture of the system: well, there you go!

dynamixel-msgs

Dynamixel is a specific brand of motor controller. Insofar as there are no motors involved here, this was sucked in as a dependency only because someone was too lazy to stub out some code somewhere. Ditch that dependency, and if it causes problems somewhere, stub out those parts. If you see the words "motor safety", this was about making sure the plastic skin of the robot head would not tear, by making sure the motors did not move past those fixed points. This subsystem can be scrapped, as it is particular to a specific (and presumably obsolete) model.

because the opencog ones were not found.

Well, that's ... odd! I'll look into that.

linas commented 2 years ago

One more comment:

Replacing the vision system wholesale with something modern

You have four choices here:

1) Build a new face-tracking system, much like the old one. If properly calibrated for camera angle and distance-to-monitor, the blender face will turn to look at you in a rather realistic fashion: it really does look like she's looking at you from the monitor. (Her eyes will even do a depth-of-field thing, instead of staring off at infinity. That is, she'll focus to the right depth, too.)

2) Integrate the above with Zoom or something like that, so that anyone can zoom-chat with her. I always wanted to do this, but no one else seemed to be into it all that much.

3) Build a vision system that can process much more than just faces, and see other things too. For the robot, this would have been crowds and trade-show-floor chaos. But since you won't be dragging your desktop webcam into the great outdoors, this doesn't make much sense.

4) Do basic research into vision. I'm trying to do that at https://github.com/opencog/learn which would integrate vision, sound and text (and other senses), but it's nothing more than a sketch at this time. But I am serious about working on this.

mwigzell commented 2 years ago

Got it. I'll use the framework we have and replace the vision/perception system, working towards putting in atomspace. Morph eva-owyl to eva-atom?


linas commented 2 years ago

The opencog repos are still there, but they were out of date. I synced them up. It would be best if these were used.

linas commented 2 years ago

eva-owyl

No, that code is just plain dead. It was ported to opencog long ago. A part of it resides in

https://github.com/opencog/ros-behavior-scripting

which I guess is just the opencog <--> ROS shims. The actual behavior scripts are in

https://github.com/opencog/opencog/tree/master/opencog/eva

Stuff you need to know:

1) This should be split out into its own git repo, but that would be kind-of hard just right now.

2) The behaviors are written in "raw Atomese". Now, raw Atomese is terribly low-level; it's not human-friendly, it's more like programming in assembly code. (It's meant to be like that: other algos actually need this low-level interface; it's just tough for human programmers.)

3) To partly alleviate the above, something called "ghost" was created. It is modeled on ChatScript (and is faux-compatible with it); ChatScript is a chatbot system. Ghost was supposed to include directives for moving the face and arms (and generic robotic stuff), but I don't think that was ever done. ☹️

4) There's a git repo called "ghost-loving-ai" that has some dialogs allowing Eva to lead a meditation session. It was also supposed to visually mirror the face the robot was seeing (so if you frown, so does Eva, and so on). I've been led to believe it worked, but I never saw a working demo myself, so I dunno.

So again, this is the wild frontier. You can pick up pieces-parts and wire them up and try to get them to do stuff.

Some comments about a long-term vision (and about "opencog" vs "atomspace".)

linas commented 2 years ago

Anyway, since the pi_vision repo has now ripped out the openni and ros-mjpeg packages, maybe it is not so hard to get running again. It might be the easiest path to a working demo where the face tracking works.

mwigzell commented 2 years ago

Hi Linas, thanks! I'll check all that out too. Whew! I'm a bit overwhelmed; I just need some time to figure out how to ask the questions I need to ask, and to answer them. I found this, with a bunch of stuff, that the guy forked:

https://github.com/vytasrgl/blender_api

It has the Eva.blend file, whereas the one in your repository did not.

I succeeded in building pi_vision etc. from opencog.

I agree, I'm beginning to narrow my focus to pi_vision for now.

Regarding your "learn" project, I'm interested, but am happy to delay gratification until I have come up to speed some more.

One question: was the port mapping issue between Docker and ROS fixed? Or is that an ongoing issue? It does seem awkward having the same app in multiple containers. --Mark


linas commented 2 years ago

vytasrgl is Vytas Krischiunas; he's the one who set up all this stuff originally. (Except for the blender model -- that was created by an artist who knew how to sculpt faces in Blender.) So we forked him, rather than vice versa.

The blend file you found is old and out of date. Don't use it; it does not belong in that repo anyway. The blender API repo contains ONLY the python code that acts as an API to blender; it does NOT contain the blender model itself -- that's in the other blender repo. The idea was to be modular, with the API distinct from specific blender models (as there were several/many of these). The best and greatest Eva model should be called something like Eva2.blend or similar; it was touched up by mthielen (Mark Thielen), who fixed assorted lighting bugs, camera bugs, etc., but also altered some of the skin-tone textures in maybe unfortunate ways. I don't recall. There were some less-than-ideal versions in there. Also, some of them worked for some versions of blender but not others -- the blender folks weren't exactly binary-compatible with their own file format. All of these files should be in the opencog repos, somewhere. If they are not showing up, I can do some archeology and find them.

I'm a bit overwhelmed,

Yeah. Sorry. At least now you understand why the docker files got bit-rotted. The architecture is complex, and the sand kept shifting under our collective feet.

linas commented 2 years ago

Oh, and a warning: some of the blender files have different "bones" in them, or at least different names for the bones, and as a result the python API does not know how to find them. This means that some of the animations might not work with some of the files, until you hand-edit either the blender model or the python code to use the correct names and send the correct messages. Some messages in different models also take more or fewer parameters, e.g. how fast she should blink.

Testing is "easy" -- you try each of the named animations one at a time (smile, blink, surprised, lookat xyz, gazeat xyz -- looks turn her neck, gazes move only the eyes) and you see which ones fail to work. To actually fix whatever didn't work, you have two choices: (1) find a different blend file that works, or (2) prowl around in the guts of the blender model, reading the python code, and seeing which strings fail to match up. It's "not that hard", but it would take a few days of exploring the zillions of menus that blender has, to find the menu that is hiding the animations, and then figuring out how to dink with it. I'm pretty sure that the Eva2.blend file was fully compatible with the python API wrappers, i.e. everything worked.
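A hedged sketch of what that one-at-a-time testing could look like from the ROS side; the topic name and message type below are guesses (blender_api_msgs defines its own types), so check rostopic list for the real interface first:

rostopic list | grep -i blender       # see what the blender API node actually exposes
# Hypothetical topic and message type -- verify against blender_api_msgs before use:
rostopic pub --once /blender_api/set_gesture std_msgs/String "data: 'blink'"
rostopic pub --once /blender_api/set_gesture std_msgs/String "data: 'smile'"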

Also, some of the models might be missing lip animations. The good models have about 7 or 9 different animations for different sounds: m (lips pursed), p (plosive), o (big open round mouth), k (a smiling-like mouth), etc. These can be mapped to various phonemes, which are generated by at least one of the open-source text-to-speech libraries (I think maybe CMU or flite or MaryTTS, I forget which).

Sorry to flood you with all these details: I mean mostly to provide a flavor of what worked, and what can be made to work without writing new code, just by re-attaching whatever got detached. I'm hoping this serves as a motivator, not a demotivator.

linas commented 2 years ago

port mapping issue between Docker and ROS fixed?

There was an adequate work-around. The extended belly-aching was an attempt to shame the docker developers into fixing a docker bug, but they were pretty shameless and wandered off in some wacky direction. Corporate brain-freeze. Which is one reason why I use LXC these days, and why docker has fallen on hard times: their vision did not align with the vision of their users. (ROS was not entirely blameless; they hadn't quite thought things through at that point. I presume the latest and greatest ROS has all the latest and best networking, but I dunno.)

mwigzell commented 2 years ago

Hi Linas. Thanks for all that info (the last two emails as well).

1) There is a Sophia.blend in opencog/blender_api, matching the Eva.blend in Vytas Krischiunas's fork. So far I haven't found the "other repo" with all the blend files. The way the code is written today, the script imported from eva-ros/scripts/eva.sh starts blender inside the container like so:

tmux new-window -n 'eva' 'cd /catkin_ws/src/blender_api && blender -y Eva.blend -P autostart.py; $SHELL'

So you see that someone intentionally altered the delivery mechanism? Any idea where those Eva2.blend files are?

2) Please do continue to flood me with those details; they are very useful for me to get the picture, as you intended.

3) So the original issue with Docker is still there? Perhaps I can switch to LXC eventually? Yeah, "corporate brain-freeze", tell me about it. I got the picture that the issue is that Docker won't let ROS nodes be structured as they should be? And LXC will?

linas commented 2 years ago

Eva.blend

If it works, it works! I looked around; I have various copies, and I now believe the stuff in git is the best & finest.

LXC

So, LXC, and its extensions LXD and LXE, are container systems (so, like docker) but also very different from docker in philosophy. Some of the differences are:

  • There is no install/build script, like in docker. (You can use conventional bash scripts for that, if that is what you want.)
  • You can stop running containers, and when you restart them, they restart where you left off. This means that e.g. you can resume where you last left off (without wiping out your data), and also that you can migrate existing containers to machines with more RAM or CPU (or less, as needed); whereas in docker, you always restart with a fresh, brand-new image.

So it's the same base technology (linux containers) but a different vision for how they're used & deployed.

BTW, LXD and LXE make more sense if you have a cloud of hundreds or thousands of machines, I guess. For a single-user scenario, like mine, LXC is simpler.

mwigzell commented 2 years ago

Hi Linas, OK, I'll go with what we have in git for the ???.blend files. Yes, I have been diverted to LXC in order to build an Ubuntu 20.04 container to try this stuff out in: an arch install of ROS currently fails due to catkin_pkg vs catkin-pkg, I think. So, LXC has been fun, though it has a bug: when trying to run apt-get install in a snapshotted LXC container bridged to the host, I get "Invalid cross-device link" errors. So right now I'm a little flummoxed. I just finished reporting the LXC issue; I think from other posts it is in the kernel, but it seems to be a recurring issue -- the last report was in 2018. Cheers --Mark


linas commented 2 years ago

LXC container bridged to host

Don't know what that means. I usually just ssh into the container. Under ssh, it behaves just like anything else you'd log into (more or less). If I'm desperate, I use lxc-attach, but there you have to be careful, because your bash environment might get used in the container, and your current env is probably wrong for the container. In that case, lxc-attach --clear-env is safest ... but mostly, just use ssh.
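A minimal sketch of that workflow with the classic lxc-* tools; the container name and address are placeholders:

lxc-start -n eva-dev                  # boot the container
lxc-info -n eva-dev                   # shows state and the DHCP-assigned IP
ssh mark@10.0.3.42                    # preferred: plain ssh to that IP
lxc-attach -n eva-dev --clear-env     # fallback; --clear-env keeps the host env out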

mwigzell commented 2 years ago

I only started doing LXC last week, so maybe I missed something obvious about using it. By "LXC container bridged to host" I mean that I created a network bridge between host and container, and acquired an IP address for the bridge using DHCP, so that the container can access the internet. I'm assuming you don't have access to the internet from your container? Or maybe I am overcomplicating things? I need access to the internet from the container, since I want to install a ton of packages. Now I have:

          NIC
           |
         Bridge
   -------------------
   |        |        |
  Host    LXC 1    LXC 2  ...

I usually do the attach as you mention, then run "bash" once there, and it picks up the default bash environment for the distro I'm on, in this case Ubuntu 20.04. My purpose is to run ROS and Blender in the LXC and thereby thoroughly understand what is going on, without Docker in the way. Then I'll be able to understand how to put them into Docker, though I have to wonder: why? Why not just get everything running in the LXC and ship it like that? Python is just a pure pain. That is another reason to do this: I need to sort out Python2, Python3, Python3.5 and Python3.8 in Ubuntu. All the installing of "pip" and "pip3" that the Docker scripts do makes it hard to know what is actually running. --Mark


linas commented 2 years ago

Bridge

Oh, OK. Yes, my containers have IP addrs; yes, they come from DHCP. I don't recall configuring this; it "just worked". Yes, they have full network access. I never used snap inside of them, and it's hard to imagine why that wouldn't work.

Why not just get everything running in the LXC and ship it like that?

Sure, one could do that. The only problems with this are:

  • People are unfamiliar with LXC or don't want to mess with it. The usual politics of technology selection.
  • The container itself will be a gigabyte in size or so -- much larger than a few dozen kbytes of docker scripts.
  • Minor risk that the container comes with some exploit installed.

Python

Yes, python is a pain in the neck. Avoid python2 at all costs, if at all possible -- just don't install it. Back in the day, ROS was on python2 and blender was on python3. I assume ROS is fully on python3 these days? Are you saying that parts of ROS don't work with python3.8 (but do work with python3.5 or earlier)? The Eva subsystem does not use more than half-a-dozen or so ROS bits-n-pieces, although the way it installs, everything gets sucked in. Yes, catkin is a pain in the neck, and I spent many a fine day trying to figure out why it was going haywire for me. That was then; I would like to think it's all better now.
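A hedged sketch of how to pin down which interpreter and pip each alias resolves to inside a container, before untangling anything:

which -a python3 pip3          # every match on $PATH, in order
python3 --version
pip3 --version                 # also prints which python it is bound to
# Invoke pip through an explicit interpreter to dodge alias confusion:
python3.8 -m pip install --user somepackage   # placeholder package name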

In opencog-land, I've mostly avoided the pain of using python by using scheme/guile instead. The only problem there is that most people don't want to learn scheme.

Large complex systems made out of many parts are inherently ... fragile and tough to debug. If there's a better way, I don't know what it is.

linas commented 2 years ago

This bug: https://bugs.archlinux.org/task/73591

The Invalid cross-device link error is, from what I can tell, due to making a hard link (not a soft link) across different file systems. As you may recall, soft links are made with ln -s and can point anywhere at all, even at non-existent files. By contrast, plain ln creates another directory entry pointing at the same inode, and because it references the inode directly, it must necessarily be on the same file system. Basically, hard links are just two different names for the same file, while soft links are pointers.

If you try to ln between two different file systems, you get the invalid cross-device link error.

Please excuse me if you know this stuff (not everyone does). If you do ls -la, the number in the second column is the number of hard links to the file's inode. So try this for fun and games:

touch a                 # create an empty file; link count is 1
ln a b                  # hard-link: b is a second name for the same inode (count 2)
ln b c                  # and a third name (count 3)
ls -la                  # the second column shows 3 for a, b and c
echo "ring around the rosie pocket full of posie" > a
ls -la                  # the same contents show up under all three names
cat c                   # prints the text that was written via a
rm a                    # removes one name; the inode survives (count 2)
ls -la

and you'll see the fun.

I cannot tell from your error messages what the two different file systems are. I think it's totally bizarre (and feels way wrong) to see ./usr/ in the paths, with a dot in front of the leading slash. That is surely not right?

Also, please set TERM to something reasonable, like TERM=xterm-256color or something like that.

linas commented 2 years ago

OH ... I see ... it seems to be related to using overlayfs ... Yes, it is very tempting to use LXC with overlayfs or other overlay-style filesystems. The instructions might even recommend this! Don't do it. In practice, it is not worth it.

I see it's here too: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=836211 and as you noted, there is a docker variant of it at https://github.com/docker/for-linux/issues/480

mwigzell commented 2 years ago

Python: I'm not sure what I was saying there; that's why I want to get a container and try to be a minimalist with the install. You know how new tools are always used to rewrite themselves using themselves? Like C, etc. So with AGI, the first thing it should do is re-create the build stack to be smooth and consistent.


mwigzell commented 2 years ago

Yes, I wasn't paying much attention to setting up the environment just yet; I wanted to see if I could install a few things first. And no, I can't. I know what you mean about the links. I think it is coming from the overlay FS, which has a weird and wonderful structure. I have a bug in the fire, and have switched to an arch LXC, which is working fine atm.


mwigzell commented 2 years ago

Yeah.


mwigzell commented 2 years ago

This issue was fixed; however, more fixes are needed. Closing.