RolandHughes / ls-cs

Cross platform C++ libraries

Test ls-cs-0.1.1 on MX Linux and possibly an RPM distribution #11

Open RolandHughes opened 2 weeks ago

RolandHughes commented 2 weeks ago

Uninstall the lscs you have installed, build the ls-cs-0.1.1 branch from scratch under MX (or another non-Ubuntu, Debian-based) Linux, then install and test.

cd /usr/bin
ls *_ls

Execute a few of those to ensure start-up is near instant. I fixed a horrific architecture flaw with .conf and platform loading where those took many seconds (even on my 20-core 128GB RAM z820).

Build and test the following examples:

  1. console-based Hello World program
  2. GUI-based Hello World program
  3. simple text editor

If this is clean, feel free to merge with master (overwriting the tweaks you made there), create an ls-cs-0.1.1 tag for master at that point, close this ticket, and let me know.

If you happen to have openSuSE 15.6 or Fedora 36 running in a VM, try building an RPM. I haven't updated that script, focusing on Ubuntu 20.04 since that is predominant in the medical device world. (Well, 18.04 is, but . . . it's on life support.)

I hope to hear back from an open-source legal/licensing expert with respect to which code base I can utilize for serial port support.

I am a contributor to this SerialPort library, but it is Linux only, which would mean rolling my own layers for Windows/Mac/BSD. The ideal case would be getting to use and heavily modify the serial port class from Qt 5.15.2. A less ideal case would be the source from the Qt Playground days of 4.8.x. Worst case, rolling my own.

I am going to have to roll my own for the Parallel Port class.

You definitely need a serial port when working on medical devices. Some medical devices and industrial controls need a parallel port.

chatchoi commented 2 weeks ago

I don't think you can take code from Qt 5.15.2. You are restricted to the last version of Qt5 available under the LGPLv2.1 license.

chatchoi commented 2 weeks ago

One thing I forgot to tell you. I'm using the AHS version of MX Linux. It's a bit different from the normal MX Linux, as I will always get an up-to-date graphics stack (Mesa, libdrm,...) and kernel. They are different from what is available on vanilla Debian Stable. I will not switch to the normal MX Linux because it doesn't work well on my hardware (my USB dongle will not work).

RolandHughes commented 2 weeks ago

I don't think you can take code from Qt 5.15.2. You are restricted to the last version of Qt5 available under the LGPLv2.1 license.

I don't know; that's why I asked someone who deals with licenses every day. The open-source releases go all the way to 5.15.6 at least. https://download.qt.io/archive/qt/5.15/5.15.6/single/qt-everywhere-opensource-src-5.15.6.zip.mirrorlist

On the official site, under single: https://download.qt.io/archive/qt/5.15/5.15.15/single/

[Screenshot from 2024-11-09 08-07-23]

I have the original "playground" version as well.

Gospel truth, I'm thinking about skipping everything Qt did with their serial port class and rolling my own. There are a few others who write drivers for custom boards who are interested in this. If they hop on, we will definitely cut bait on the Qt serial port classes.

I understand why they used QIODevice as their base. It was just bad: it basically forces things through the main event loop. Despite QThread and all of the talk about multi-threading, Qt is a single-threaded application framework. Over-reliance on a main event loop drastically limits throughput. Despite all of their wailing and admonishments about never running additional event loops and never calling processEvents(), the QDialog class runs its own event loop, because Qt couldn't get that academic Utopia to work properly either.

Since you are young, I will share with you some wisdom from my advanced age.

Do you know why Docker containers are popular in embedded systems now?

You can't keep throwing hardware at single-threaded (main-event-loop-based) application frameworks in a battery-powered world. Most x86 home hobby computer developers never went to college for a degree in Computer Science, so they have no idea how to do anything right.

Enter Docker. Yet another layer of bugs and system dependencies to be introduced. Now kids who never went to college don't have to figure out how to launch and synchronize multiple processes. With the single-threaded API that has existed for decades, they can write multiple smaller programs, all running in different containers.

At any rate, the single-thread-y-ness of Qt is one of the reasons the serial port stuff kind of sucked. Most of the examples execute in the main event loop. Hey, at 1200 baud you can make that work on an i-5 gen-3 machine. Today we have 16-port cards capable of 460.8 kbps. Some 2-port cards are 921.6 kbps. There is even a 16-port card that can run 921.6 kbps on every port.

I don't have a link for you, but for one project I was researching serial port cards that had LAN speeds.

Transmitting at 8-N-1 (1 start bit + 8 data bits + 1 stop bit = 10 bits per byte on the wire): 921600 / 10 = 92160 bytes per second.
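
Spelled out as code, in case anyone wants to check the arithmetic (a quick sketch; 921600 bps is the standard UART rate marketed as 921.6 kbps):

```cpp
#include <cstdio>

int main()
{
    // 8-N-1 framing puts 10 bits on the wire per data byte:
    // 1 start bit + 8 data bits + 1 stop bit.
    constexpr double lineRateBps = 921600.0;   // the standard "921.6 kbps" UART rate
    constexpr double bytesPerSec = lineRateBps / 10.0;
    std::printf("%.0f bytes/second\n", bytesPerSec);   // prints 92160
}
```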

Here is the dirty little secret that kids who don't go to college for Computer Science neither know nor understand.

A thread executes on only one core at a time.

Your thread may go idle, get swapped out, and be reloaded onto a different core, but at any given moment it can only use a single core.

Your main event loop can only execute in the primary thread.

If the class you design has to put events on the primary event queue of the primary thread in order to process I/O, you are throttled by that single core. Even if you have a 20-core machine, it will only use one.
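
To make that concrete, here is a minimal sketch of keeping serial I/O out of the main event loop entirely. This is not LsCs API, just an illustration; it assumes POSIX termios and C++11, and the device path and line rate are made up:

```cpp
// Sketch only -- not LsCs API. Assumes POSIX termios and C++11 threads.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

std::atomic<bool> g_running{true};

void serialWorker(const char *devicePath)
{
    int fd = ::open(devicePath, O_RDONLY | O_NOCTTY);
    if (fd < 0) { std::perror("open"); return; }

    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    cfsetispeed(&tio, B115200);      // hypothetical line rate
    tcsetattr(fd, TCSANOW, &tio);

    std::vector<char> buf(4096);
    while (g_running.load()) {
        // This read() blocks THIS thread only; the main event loop never
        // waits on the port. A real implementation would poll() with a
        // timeout so shutdown cannot hang on a blocked read.
        ssize_t n = ::read(fd, buf.data(), buf.size());
        if (n <= 0) break;
        // hand the bytes off here (ring buffer, queue, signal, ...)
    }
    ::close(fd);
}

int main()
{
    std::thread worker(serialWorker, "/dev/ttyUSB0");  // hypothetical device
    // ... the main event loop would run here, unthrottled by serial I/O ...
    g_running = false;
    worker.join();
}
```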

During the early 1990s, right after Chrysler introduced the Eagle Premier ES Ltd, I did a project for Waste Management. Running under MS-DOS on cast-off IBM PCs that had full-height 10 MB hard drives, I ran a full 16-port DigiBoard card: one core running at a 4.77 MHz clock speed. Thank God for the GreenLeaf CommLib back then. Wish I had saved my disks; good ideas worth borrowing there.

Sorry for the rant, but I write "how the sausage is made" posts.

RolandHughes commented 2 weeks ago

One thing I forgot to tell you. I'm using the AHS version of MX Linux. It's a bit different from the normal MX Linux, as I will always get an up-to-date graphics stack (Mesa, libdrm,...) and kernel. They are different from what is available on vanilla Debian Stable. I will not switch to the normal MX Linux because it doesn't work well on my hardware (my USB dongle will not work).

Doesn't matter. Please quickly test and merge. I didn't think I would have this morning free to work on the code base. I want to start a new branch from a freshly merged master. Lots of legacy bugs left to clean up before I can move forward with GLFW.

Honestly, I find it reprehensible that CopperSpice didn't include a serial port class. That's the backbone of industrial device interfaces.

If you ever find a first edition of this book, look in the front. All later printings removed the section covering the history of RS-232 and explaining the original use of each of the 25 pins; it is the most valuable part of the book. RS-232 was a voltage specification created during the Industrial Revolution. I am kicking myself for having given my copy away to some noob decades ago.

RolandHughes commented 2 weeks ago

I updated the project roadmap. We are definitely not going to use the Qt serial port class.

RolandHughes commented 1 week ago

I merged and created a new branch, but please test. If there are bugs they need to be fixed. Thanks!

chatchoi commented 1 week ago

To save money, I bought used RAM on the internet, but it turned out that it doesn't last long. I'm running with only one spare 4GB RAM stick. Even when I modified your build script to use ninja install -j2, it always failed at the linking step. I have tried tweaking the build script for several days without success. Even though I can let the machine run all day, -j1 is still too slow. I'm afraid I can't help you anymore. I'm sorry for my late reply.

RolandHughes commented 1 week ago

Please tell me what kind you need. If you have an older machine, I have some modules in a drawer that were pulls from machines I discarded. I'll have to dig them out and look once I get the specs from you. It's probably easier for you to just look up the memory specs for your mobo online.

This would be desktop, not laptop memory.

I tend to be a pack rat, saving stuff others would throw away. That's why it is still sitting in a drawer rather than having been sold.

If one or more of them work for you I can just ship them to you.

Better idea! What country are you in?

One of my favorite old machines that I listed on ebid hasn't gotten any actual bids despite close to a hundred people looking at it.

https://www.ebid.net/us/for-sale/custom-amd-phenom-ii-6-core-asrock-970-extreme4-680w-ps-24gb-ram-1tb-disk-nvidia-222938234.htm

If you can provide some kind of shipping address (not a PO box), I can see what it would cost to ship to you. I set that thing up for embedded systems development when I bought it long ago and could never bring myself to get rid of it. I even replaced the original motherboard and CPU when they died a few years ago, which made no sense at all.

One of your parents' work addresses, or whatever. Talk it over with them.

I don't remember if the little red switch on the power supply toggles between American and EU standard outlet current, but I can look. It's in a box, but not taped shut.

Of all the old machines I have laying around that are looking for new homes, that machine will be the lightest weight to ship.

Despite appearances, this machine, https://www.ebid.net/us/for-sale/hp-elite-8300-24gb-ram-1tb-drive-nvidia-nvs-510-ls-120-223082357.htm, is more than twice the weight. It's from a time when a power supply was a power supply; you have to bend at the knees or risk an injury when picking that little thing up.

Talk it over with your folks. Once I have a shipping address that either my Post Office or UPS will accept, I can look into what it costs to ship. You will actually use it, not part it out, so it will be a good home.

Would just be the computer and power cord. No monitors, keyboard, or mice. I may stick in an extra drive; I'm not in my office right now. I think the drive that is in it is a spinning disk. I think I have an old 480GB (definitely under 500GB) Samsung SSD in my spare drives cabinet. That way, if the spinning disk doesn't survive shipping, you will still have a functional machine. Sometimes when an old drive is mounted in a computer it doesn't ship well.

Talk it over with the folks.

You will actually use the thing, so it will have a good home . . . assuming you don't live in a country where there are shipping restrictions. I know one cannot ship a faxmodem to Egypt and a few other countries.

I think I ran -j2 or -j3 to be "safe" due to the 8GB per core that compilers seem to want when compiling large templates. It used to take about 3 hours to do a build on it.

You can cheat a bit during your OS install, though: create a swap partition at least as big as the physical RAM. The build will run slower as it does the extra I/O swapping to disk, but it should be able to use all 6 cores.

chatchoi commented 1 week ago

I upgraded to 12GB of RAM. ninja install -j4 still fails at the linking step. I think I will try -j3 and -j2; the fewer jobs, the slower it gets. The compiling processes are fine with ninja's default job count (no modifications to your build script), but the linking step is problematic. I know my use of the word "step" is misleading; linking is not a separate step. It's not that all of the compiling processes complete and stop before the linker is invoked. They implement parallel compilation very stupidly: linking will always eat more RAM than compiling, but it is treated the same as a compile job, so one linking process (linking the current target) eats up all of the RAM while running alongside other compile processes (compiling the next target) and gets killed by the OS.

chatchoi commented 1 week ago

BTW, thank you for your kindness, but I don't live in the US, nor do I think I would want to receive any help from people in a foreign country. My PC is always old and underpowered. It only has 4 cores without SMT and had 8GB of RAM before both of the RAM sticks failed. Upgrading to 12GB of RAM means I have added 4GB compared to my last setup. The last time I tested compiling LsCs on MX Linux, I used ninja install -j1 and let it run all day. My curiosity forced me to find out whether I could compile it at all. Now I know that I can definitely compile it, but everything will be very slow. I know that I can't help you anymore. I was unqualified from the beginning. Sorry for wasting your time.

RolandHughes commented 1 week ago

Be sure to ask your folks about getting that old 6-core. My two main dev machines are 20-core with 128GB of RAM, so I don't have compilation issues.

Theoretically, your issues could be MX specific. They could also be from a poor install (most likely due to defaults).

Check the size of your swap partition.

Many of "today's" Linux distros assume RAM is so cheap nothing will ever swap so they don't create a swap partition. Ah, MX Linux has known swap partition issues.

Personally, I would use System Rescue CD to adjust the disk partitions so you have at least 48GB for swap. Then I would follow the above article and adjust the "swappiness."

Ask your folks about this other computer. It's just sitting here. Oh, I couldn't find the 480GB SSD; the old spare I had was a Samsung 840 1TB, so I stuck that in. Must have stuck the 480 in something else. The Samsung wasn't as fast as the 480 if memory serves, but it was reliable.

chatchoi commented 1 week ago

You must be mistaking me for someone else. I already told you that I'm only a high school student in the first thread I posted on your waylandgui repository. I don't have the budget to go 20-core with 128GB of RAM :smile:

RolandHughes commented 1 week ago

BTW, thank you for your kindness, but I don't live in the US, nor do I think I would want to receive any help from people in a foreign country. My PC is always old and underpowered. It only has 4 cores without SMT and had 8GB of RAM before both of the RAM sticks failed. Upgrading to 12GB of RAM means I have added 4GB compared to my last setup. The last time I tested compiling LsCs on MX Linux, I used ninja install -j1 and let it run all day. My curiosity forced me to find out whether I could compile it at all. Now I know that I can definitely compile it, but everything will be very slow. I know that I can't help you anymore. I was unqualified from the beginning. Sorry for wasting your time.

No problem.

See my previous reply about "swappiness" and your Swap partition size.

Everybody thinks "Linux is Linux" but it is not. Each distro has its own mistakes on top of core Linux mistakes.

RolandHughes commented 1 week ago

You must be mistaking me for someone else. I already told you that I'm only a high school student in the first thread I posted on your waylandgui repository. I don't have the budget to go 20-core with 128GB of RAM πŸ˜„

No, not mistaking you. High schoolers have time to read documentation. πŸ˜‚

20-core machines don't cost what they used to.

Lots of old z820 boxes out there, fully loaded. Back in the day they were $8K or more. Now they are under $500. You just have to find a used computer place in your country. They probably have a few they can't unload. πŸ™€ Everybody is fixated on i-9 stuff these days. Having both an i-9 gen-13 and this old z820 with 20 cores, I gotta say, the z820 holds its own.

mh466lfa commented 6 days ago

If you want to submit code based on their project it must be from version 5.6 or before which has a compatible license with CopperSpice. Any new code must be original work or a compatible or less restrictive license.

With newer releases of their project they moved to LGPL 3 and this license is not compatible with our LGPL 2.1 license. We strongly advise that our contributors avoid even looking at source code in 5.7 or later.

Barbara

The above is quoted from an issue on the CopperSpice project's repository. I think the same also applies to LsCs. So this Qt 5.6.3 is the last version you can import code from:

https://github.com/qt/qt5/tree/v5.6.3

Here is the QSerialPort of Qt 5.6.3:

https://github.com/qt/qtserialport/tree/2f7b2533b323644289c60370f68492aea6b1a625

Please note, even with Qt 5.6.3, if there is no LICENSE.LGPLv21 file in the Qt module, you can't import that code. The QSerialPort of Qt 5.6.3 above has a LICENSE.LGPLv21 file, so I think you are fine to import the code from it.

RolandHughes commented 6 days ago

Please note, even with Qt 5.6.3, if there is no LICENSE.LGPLv21 file in the Qt module, you can't import that code. The QSerialPort of Qt 5.6.3 above has a LICENSE.LGPLv21 file, so I think you are fine to import the code from it.

Oh, we aren't going to use any of it. I might not have been clear in the "how the sausage is made" post above. What they have is sh*t.

The only way I could even consider using some of it is if I (or someone else) take the time to fix the base QIODevice class. It must create/run/operate the device in a different thread that does not have affinity to the core running the primary thread.
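
On Linux you can even steer the worker away from whichever core the primary thread is using. A sketch, not LsCs API, using the GNU-specific pthread_setaffinity_np and assuming the primary thread sits on core 0:

```cpp
// Sketch only, Linux/glibc specific: steer a worker thread onto every
// core EXCEPT core 0, assuming the primary thread lives there.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <pthread.h>
#include <sched.h>
#include <unistd.h>

#include <thread>

void pinAwayFromCore0(std::thread &worker)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    long cores = sysconf(_SC_NPROCESSORS_ONLN);
    for (long c = 1; c < cores; ++c)   // leave core 0 to the primary thread
        CPU_SET(static_cast<int>(c), &set);
    pthread_setaffinity_np(worker.native_handle(), sizeof(set), &set);
}

int main()
{
    std::thread worker([]{ /* device I/O loop would live here */ });
    pinAwayFromCore0(worker);
    worker.join();
}
```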

A huge limitation of legacy application frameworks is the single-thread-y-ness imposed by a main event loop. Threads bought us close to nothing all the way through the 486 processor. Yes, theoretically a different thread could execute while another was blocked waiting for I/O, but everything was INT 21h going through the BIOS. Check out the list of interrupts and their request vectors.

INT 8 thru 0FH - Vectored Hardware Lines (in IBM at least). These 8 interrupts are generated in response to IRQ 0 through IRQ 7 (if enabled via port 21):

IRQ0 - timer interrupt
IRQ1 - keyboard interrupt
IRQ4 - serial controller
IRQ6 - diskette interrupt
IRQ7 - PPI interrupt

They left out IRQ3, which was split between COM2 and COM4.

This is the era from whence Qt, QIODevice, and the serial port class came. Architecturally it is a catastrophe.

The new SerialPort device must:

  1. Operate in an independent thread
  2. Support the 4 "standard" COM ports
  3. Support this 16-port card
  4. Provide X-, Y-, and Z-Modem transfer; maybe even Kermit
  5. Support user-configurable "packet" or "record" I/O, preferably with ring buffers (see the sketch below)
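
For item 5, something along these lines is what I have in mind; a single-producer/single-consumer sketch, not the LsCs API:

```cpp
// Minimal single-producer/single-consumer ring buffer sketch (C++11).
// Illustrative only -- the port thread pushes raw bytes; the application
// thread pops them and assembles "records" at its own pace.
#include <array>
#include <atomic>
#include <cstddef>

template <std::size_t N>
class ByteRing {
public:
    bool push(unsigned char b)                 // called from the port thread
    {
        std::size_t head = m_head.load(std::memory_order_relaxed);
        std::size_t next = (head + 1) % N;
        if (next == m_tail.load(std::memory_order_acquire))
            return false;                      // full -- caller decides policy
        m_data[head] = b;
        m_head.store(next, std::memory_order_release);
        return true;
    }

    bool pop(unsigned char &b)                 // called from the app thread
    {
        std::size_t tail = m_tail.load(std::memory_order_relaxed);
        if (tail == m_head.load(std::memory_order_acquire))
            return false;                      // empty
        b = m_data[tail];
        m_tail.store((tail + 1) % N, std::memory_order_release);
        return true;
    }

private:
    std::array<unsigned char, N> m_data{};
    std::atomic<std::size_t> m_head{0};
    std::atomic<std::size_t> m_tail{0};
};

int main()
{
    ByteRing<4096> ring;
    ring.push(0x7E);                 // port thread side
    unsigned char b;
    while (ring.pop(b)) { /* app thread side: assemble records */ }
}
```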

A later update will need to support synchronous communications: initially this card and the RS-485 capabilities of the Verdin dev board.

A "regular" developer with a "standard desktop" could get items 1, 2, 5, and 5 developed. Either they already have a standard COM port or they can quickly add it via add-in card.

You would have to be an industrial or embedded systems dev to attempt the 16-port card and synchronous communications support features.

I'm glad I don't have to dig up the DigiBoard stuff I did under DOS, because there were a lot of DIP switches and jumpers to set on those boards so DOS could run 24 serial ports on a single interrupt.

I am undecided about supporting the USB serial port service devices and USB serial ports in general. The cheap USB to single serial port devices out there are really sketchy.

chatchoi commented 5 days ago

It turned out that the problem is not about swappiness, but swap space after all. When I upgraded to 12GB of RAM, I thought that swap space was no longer needed, so I removed the swap partition to extend my root partition. The swap partition was only 8 GB, which is clearly not enough according to what I observed in htop during the build. I added a 12GB swap file and everything is fine now.

chatchoi commented 5 days ago

The build completed after 3 hours. All examples running fine. This is the result of the command you requested me to run:

$ cd /usr/bin
$ ls *_ls
lconvert_ls  linguist_ls  lrelease_ls  lupdate_ls  rcc_ls  uic_ls

chatchoi commented 5 days ago

I merged the ls-cs-0.1.2 branch to master as you have requested.

chatchoi commented 5 days ago

Disclaimer: When I said the examples are running fine, I mean they run without crashing. I have no idea what they are supposed to do, so I can't verify whether they are working correctly. Please keep this in mind.

RolandHughes commented 5 days ago

It turned out that the problem is not about swappiness, but swap space after all. When I upgraded to 12GB of RAM, I thought that swap space was no longer needed, so I removed the swap partition to extend my root partition. The swap partition was only 8 GB, which is clearly not enough according to what I observed in htop during the build. I added a 12GB swap file and everything is fine now.

There was great rejoicing!πŸ₯³

RolandHughes commented 5 days ago

It turned out that the problem is not about swappiness, but swap space after all. When I upgraded to 12GB of RAM, I thought that swap space was no longer needed, so I removed the swap partition to extend my root partition. The swap partition was only 8 GB, which is clearly not enough according to what I observed in htop during the build. I added a 12GB swap file and everything is fine now.

For "low RAM" machines, which is anything having less than 32GB you need to have a swap partition that provides 8GB per core. Most of the time it will be empty, but during a really big build like this one, it can really speed things up.

If you have a desktop computer instead of a laptop, it is best to boot from SSD but run the build on a good spinning disk. Not an el-cheapo generic disk. Lately I've been buying WD Black; the Blue isn't what it used to be. I've also been buying some Seagate Barracuda drives.

SSDs are incredibly fast at reading and "fake fast" at writing. They have a large cache which makes you "think" the write was fast. Once you pop past the end of the cache, though, they are incredibly slow at writing. Builds like this create thousands of temporary files and easily pop past the end of the SSD cache.

Oh, if you have "lots" of RAM you still might want at least 12GB of swap. I just checked, and Ubuntu 20.04 on this 20-core 128GB z820 didn't create a swap partition. Apparently it created a swap file.

RolandHughes commented 5 days ago

The build completed after 3 hours. All examples running fine. This is the result of the command you requested me to run:

$ cd /usr/bin
$ ls *_ls
lconvert_ls  linguist_ls  lrelease_ls  lupdate_ls  rcc_ls  uic_ls

πŸ‘

RolandHughes commented 5 days ago

Disclaimer: When I said the examples are running fine, I mean they run without crashing. I have no idea what they are supposed to do, so I can't verify whether they are working correctly. Please keep this in mind.

At this point they don't "do" much.

console-hello is a "see spot run" application interacting with the console. It has one useful part for developers.

Hello World!

Version:    1.3.4
Build Date: 11/11/2024
Sizes on your machine:
     unsigned char      1
     qint8              1
     qint16             2
     qint32             4
     qint64             8
     short int          2
     int                4
     long               8
     long long          8
     float              4
     double             8
     long double        16

Never ass-u-me you know the size. long long on many platforms is 16, but not this one. If you read the comments in the code, this program exists for developers to use for experimentation. An example of that is the regular expression stuff later in the program.
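
If your code depends on a size, make the assumption explicit so the build fails instead of the device. Plain C++11, nothing LsCs-specific; the second assertion matches the console-hello output above and will fail on platforms with a different long double, which is exactly the point:

```cpp
#include <cstdint>

// Fail the build, not the device, when a size assumption is wrong.
static_assert(sizeof(long long) == 8,
              "wire protocol code assumes an 8-byte long long");
static_assert(sizeof(long double) == 16,
              "math code assumes a 16-byte long double (true here, not everywhere)");

int main() {}
```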

I actually used console-hello as the basis for the new example I'm working on, cups-experiment. I am ripping out the buggy legacy PPD CUPS interface and replacing it with the modern calls.

gui-hello dumps the same output to a text edit in a main window. It's a simple thing for developers wishing to test something graphical.

simple-text-editor is just that: you can open a file, edit it, and save it. There is no syntax highlighting or spell checking. If you click on Help and choose "About" it will tell you what it is . . . it just has the wrong title on the About box. That's something you could actually fix.

RolandHughes commented 5 days ago

Disclaimer: When I said the examples are running fine, I mean they run without crashing. I have no idea what they are supposed to do, so I can't verify whether they are working correctly. Please keep this in mind.

Frack!

I thought I did that already. GitKraken says no, I didn't.

Thought I already created 0.1.3 and was working there. I've got a bunch of incomplete changes for CUPS.

Looks like I've got some stashing, pulling, and branching to do. 😧

mh466lfa commented 4 days ago

I had a quick look at the build script. You are setting it to build in Debug mode. This is the reason it takes so much memory to link the library (the object files are bloated with debug info). You can build the library in release mode but still have debug info with -DCMAKE_BUILD_TYPE=RelWithDebInfo.

RolandHughes commented 4 days ago

I had a quick look at the build script. You are setting it to build in Debug mode. This is the reason it takes so much memory to link the library (the object files are bloated with debug info). You can build the library in release mode but still have debug info with -DCMAKE_BUILD_TYPE=RelWithDebInfo.

But generally there is not enough debug info to step through with the Emacs GDB interface.

Even in release mode, CopperSpice took seven train cars of RAM. The academics got enamored with template metaprogramming.

Feel free to compile in release mode for your testing. It will still be like a busload of obese people that just got dropped off at an all-you-can-eat dessert buffet.

Templates are nearly impossible to debug. I worked at a client site that canned a developer who firmly believed there was no programming problem you couldn't solve by nesting templates at least seven layers deep. If you tried to step into any of the areas where they were having trouble, GDB would crash; it physically couldn't unwind it. With templates you have to use the fool's datatype: auto. This stops you from using the "default debugger" of printing stuff out to the screen or a log file, because the I/O statements need to know a type. With auto, especially auto multiple layers in, you cannot easily determine the data type or whether it has a stream operator.

Don't ever become a fool.

I see examples all over the Internet of fools using auto everywhere. In the embedded systems/medical device world you can easily fail your SAFETY inspection/evaluation with auto anywhere but in a generic container template. Using auto makes your code less type-safe: a change in another part of the program can change the data type auto "determines" the next time the program is compiled. Take a look at the size output from console-hello. If auto shrinks from long to int, or long double to float, because of a change elsewhere, your generic routine now has a drastic size limitation that will introduce overflow and other errors. If you don't throw an exception or otherwise crash right there, good luck finding that in a large project running many threads with core dumps disabled (the default in Ubuntu and many distros these days).
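
A contrived sketch of that failure mode (hypothetical names; note that nothing at the use site changes when the upstream type does):

```cpp
// Hypothetical: a maintainer "optimizes" the return type from long to
// int. Every 'auto' call site silently inherits the narrower range with
// no compile error and no warning at the point of use.
long totalBytesQueued()            // later changed to: int totalBytesQueued()
{
    return 3000000000L;            // fine as long, overflows a 32-bit int
}

int main()
{
    auto total = totalBytesQueued();   // type silently tracks the change
    // Arithmetic that was safe with 'long' now overflows undetected if
    // the return type shrinks. A named type would refuse to compile or
    // at least warn about the narrowing conversion.
    (void)total;
}
```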

chatchoi commented 4 days ago

I don't know how to use both Emacs and GDB. I can't help you with debugging.

RolandHughes commented 4 days ago

I don't know how to use both Emacs and GDB. I can't help you with debugging.

LOL. I wasn't suggesting you would debug, just explaining why I was currently building with full debug. For testing you can switch it to release.