Closed flathat closed 9 years ago
Cool topic!
Just as another data point: with an idling lane (driver listening for scale input, apache+firefox polling via ajax for available input) on an Atom D2500 I see Apache at ~10% CPU, Firefox at ~1-2% CPU, and pos.exe at 0% CPU.
I'm a bit surprised to see your apache number so high. Linux is by far the preferred platform for apache, so I'd expect to see better performance there. It's possible that the scale driver is skewing the numbers. I'm curious what the idle numbers are for apache+firefox with the driver stopped. Their actions should be perfectly identical in the idle case where no input is occurring.
As for responsiveness, I don't think all CPU usage is equal. The loop in the scale driver that must be causing the 100% measure is effectively:
while (pos_is_running) {
    serial_port.ReadNextByte();
}
Even if that read returns "instantly" when no data is available, I/O takes forever on a CPU timescale. So the OS scheduler has plenty of time to insert other processes' instructions during those I/O gaps. OTOH if you have a loop with no I/O (or very little) then there's no gap in the series of instructions to allow other processes and responsiveness will nosedive.
If you want to insert a sleep period, I'd try something like this:
override public void Read()
{
    string buffer = "";
    if (this.verbose_mode > 0)
        System.Console.WriteLine("Reading serial data");
    sp.Write("S14\r");
    while (SPH_Running) {
        try {
            int b = sp.ReadByte();
            if (b == 13) {
                if (this.verbose_mode > 0)
                    System.Console.WriteLine("RECV FROM SCALE: "+buffer);
                buffer = this.ParseData(buffer);
                if (buffer != null) {
                    if (this.verbose_mode > 0)
                        System.Console.WriteLine("PASS TO POS: "+buffer);
                    this.PushOutput(buffer);
                }
                buffer = "";
            } else {
                buffer += ((char)b).ToString();
            }
        } catch {
            Thread.Sleep(100);
        }
    }
}
I am assuming that ReadByte on Linux returns instantly, or close to it, rather than honoring the specified 500ms read timeout. I am further assuming that it throws an exception of some kind when this happens, so that the sleep can occur in the catch block. You wouldn't want to sleep after a successful read; otherwise it would sleep 15+ times while reading a UPC.
It's also possible that this won't do anything at all because the driver is pausing 500ms on the read and the difference we're seeing is due to differences in how the OSes measure CPU utilization.
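The pattern those assumptions lead to — sleep only when a read times out, never between the bytes of a message — can be illustrated generically. In this Python sketch a queue stands in for the serial port and `queue.Empty` plays the role of the read-timeout exception assumed above; everything here is a stand-in, not the actual driver API:

```python
# Generic illustration of "sleep only on timeout, never after a good read".
# A queue stands in for the serial port; queue.Empty plays the role of the
# read-timeout exception assumed in the C# sketch above.
import queue

def read_message(port, terminator=13):
    """Accumulate bytes until the terminator; back off only when idle."""
    buffer = ""
    idle_polls = 0
    while True:
        try:
            b = port.get_nowait()        # "instant" read, like sp.ReadByte()
        except queue.Empty:
            idle_polls += 1              # real code would Thread.Sleep(100) here
            if idle_polls > 1000:
                return None              # give up, for the demo's sake
            continue
        if b == terminator:
            return buffer                # complete message, no sleeps in between
        buffer += chr(b)

port = queue.Queue()
for byte in list(b"0001234500000") + [13]:   # a UPC followed by CR
    port.put(byte)
print(read_message(port))                     # prints the UPC intact
```

The point is structural: the back-off lives exclusively in the exception path, so a 13-digit UPC arriving back-to-back is assembled with zero added latency.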
Briefer pre-call notes on the other points:
A RAM disk should result in faster I/O. That would be beneficial in theory, provided that faster I/O doesn't exacerbate your CPU usage problem.
I haven't tried #321 either (no practical need, and the parallel port is more convoluted to work with in Windows). The other thing I tried at one point a while ago was decoupling the upload from the end of the transaction. Transaction finishes, data rotates out of localtemptrans, and the receipt prints. Then later, as a separate process, transaction data is shipped to the server. The problem I saw with this was that cashiers would finish the last customer of their shift and immediately print a tender report [from the server] which was missing the last transaction.
For product search, I'd first try isolating which search query is causing the slowdown. Put "return array();" at the beginning of all but one search module's search() method. Run a search using just that one module that still issues a query. Repeat until you know which one(s) are slow.
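The isolation idea above can be sketched generically. The actual modules are PHP classes (the stub is literally `return array();`), but this Python stand-in shows the same bisection: run each module alone with all the others stubbed out, and compare timings. The module names and timings here are invented for illustration:

```python
# Illustration of the isolation technique: stub out every search module
# except one, then time the remaining one. Names and timings are stand-ins,
# not the real IS4C search modules.
import time

def upc_search(term):            # stand-in for a fast, indexed lookup
    return ["result-for-" + term]

def description_search(term):    # stand-in for a slow, full-table string scan
    time.sleep(0.05)
    return ["slow-result-for-" + term]

modules = {"upc_search": upc_search, "description_search": description_search}

def time_one_module(name, term):
    """Run a search with every module except `name` stubbed to return []."""
    start = time.perf_counter()
    results = []
    for mod_name, search in modules.items():
        # the PHP equivalent of the stub is putting "return array();"
        # at the top of search() in all the other modules
        results += search(term) if mod_name == name else []
    return time.perf_counter() - start

timings = {name: time_one_module(name, "apple") for name in modules}
slowest = max(timings, key=timings.get)
print(slowest)
```

Once the slow module is identified, the query it issues is the one worth examining with EXPLAIN or indexing.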
Also, I should have some time today to screenshare updates from the Wedge.
2) Could you capture what some of these queries are?
4) Probably not a huge difference. We were just excited about using a new idea at the time.
Avoid extraneous CC uploads: 9cb08542348f5cf4dff735fecf81a733539ccbdf
Work towards getting trans_status on log records away from X to a better status.
Some basic performance tips: https://github.com/CORE-POS/IS4C/wiki/Installation#apache-suggestions
RE 1. Delay opening cash drawer at the end of a transaction.
I found that with Fannie on a remote (cloud) server the set of operations in ajax-end::cleartemptrans(), including copying localtemptrans to two places on the lane and uploading dtransactions and credit card and efsnet data to Fannie, almost always takes between 3.5 and 4.5 seconds (there was one outlier of 15 seconds). For the moment I've just moved opening the cash drawer and sending the receipt to the printer before calling cleartemptrans(), so that that happens while the cashier is moving to the next customer. It is a noticeable improvement in responsiveness. The next thing will be to try the #321 changes to protect the cleartemptrans() operations from printer difficulty.
The CreditCard upload takes about a third of that time even when, as in our case at the moment, there is nothing to do, so Andy's 9cb08542348f5cf4dff735fecf81a733539ccbdf, which I've also implemented, is a help in its own right, until the not-too-distant day we integrate credit cards.
By comparison, when the lane and server are on the same machine cleartemptrans() takes about 0.05 seconds. I'd be curious about the time for a LAN Fannie.
My timer:
$lastTime = $eventTime;
$lastTime2 = $eventTime;
$CORE_LOCAL->set('ccTermState','swipe');
cleartemptrans($receiptType);
$eventTime = microtime(true);
$event = "All cleartemptrans";
$timeLog .= sprintf("%s: %0.8f\n", $event, ($eventTime - $lastTime2));
/* Then at the end */
Database::logger($timeLog);
Added this to Database.php, which logs to queries.log:
/**
  Log a message to the lane log
  @param $msg A string containing the message to log.
  @return True on success, False on failure
*/
static public function logger($msg="")
{
    $connection = Database::tDataConnect();
    if (method_exists($connection, 'logger')) {
        $ret = $connection->logger($msg);
    } else {
        $ret = False;
    }
    return $ret;
}
Bumping an old issue because this is directly relevant: b9b4800009f5e7120821e272174ffba372ae9c4d
Instead of sending data from the lane to the server by:
INSERT INTO dtransactions VALUES (some, data)
INSERT INTO dtransactions VALUES (some, data)
INSERT INTO dtransactions VALUES (some, data)
...
The changed version does:
INSERT INTO dtransactions VALUES (some, data), (some, data), (some, data)...
This is generally portable. SQL Server 2008+ supports it as does Postgres 9. A single query could offer a significant boost over multiple queries in the described scenario. I'm currently capping it at transfers < 500 records. In MySQL, the primary limitation is the server's max_allowed_packet setting. I have mine set to 32MB, but I haven't done any testing to figure out how many bytes a 499 row insert is. The failure case should be relatively harmless. If the INSERT is rejected for size (or any other reason), data will continue to accumulate on the lane as if it's offline until someone figures out the problem.
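The chunk-and-batch idea can be sketched briefly. This Python version uses sqlite3 purely as a stand-in database (sqlite also supports multi-row VALUES), and the table/column names are illustrative, not the actual IS4C schema; the 499-row cap mirrors the "< 500 records" limit described above:

```python
# Hypothetical sketch of the batching idea: group pending rows into chunks
# and send each chunk as a single multi-row, parameterized INSERT.
import sqlite3

MAX_ROWS_PER_INSERT = 499   # mirrors the < 500 record cap described above

def batched_insert(conn, table, columns, rows):
    """Insert rows in multi-row batches instead of one INSERT per row."""
    placeholder_row = "(" + ", ".join("?" * len(columns)) + ")"
    for start in range(0, len(rows), MAX_ROWS_PER_INSERT):
        chunk = rows[start:start + MAX_ROWS_PER_INSERT]
        sql = "INSERT INTO %s (%s) VALUES %s" % (
            table,
            ", ".join(columns),
            ", ".join([placeholder_row] * len(chunk)),
        )
        flat = [value for row in chunk for value in row]  # flatten parameters
        conn.execute(sql, flat)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dtransactions (upc TEXT, total REAL)")
rows = [("0001234500000", 1.99)] * 1200   # 1200 rows -> 3 batched INSERTs
batched_insert(conn, "dtransactions", ("upc", "total"), rows)
print(conn.execute("SELECT COUNT(*) FROM dtransactions").fetchone()[0])
```

Parameterizing the values (rather than interpolating them into the SQL string) keeps the batch safe regardless of what's in the data; the per-chunk cap is what keeps each statement under packet/parameter limits like MySQL's max_allowed_packet.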
I'm again having to answer complaints about slowness. In addition to dealing with my immediate issue in the first paragraph below, I'd like to gather information for a guide on the subject, describing issues and techniques that work, and those that don't or aren't worth the trouble. A pertinent bit of context is that the complaints are due in part to veteran cashiers being very familiar with the system now and not as patient as they once were; this is particularly the case for single-item transactions on busy days. The expectation is for desktop-application-like speed; the bar keeps rising.
In ajax-end.php, which handles what happens after the tender modules are finished, the upload of the transaction to Fannie happens before the drawer kick. Intermittent delay in connecting to the remote server may be part of the problem (to test this I want to run in offline mode for a while, by not calling DisplayLib::testremote() from ajax-end.php::cleartemptrans()). There is code in ajax-end about dealing with the script hanging if there is a problem with the printer. #321 led to a solution (that I'm embarrassed to say I haven't installed yet). My question now is whether, with the #321 solution, the drawer kick and receipt printing could be done before the upload without excessive risk, letting the upload happen in the lull between customers.

In PV, lookups are a string search over the description in the whole table. I don't think intra-word strings are usually being searched, so I wonder if it's worth seeing whether word-indexing the description would give faster results. At one time I wondered if caching that table in memory might help, but most of what I've read about this says that the automatic OS caching is probably doing all that can be done, so it's not worth trying to improve.

top reports that mono (running the scanner/scale driver) is using 100% of CPU, apache2 30% and firefox 12%. On my desktop, when CPU usage gets above 50% the slowdown is noticeable, and at 100% very little happens. So something else must be the case here, because the POS response is snappy if not lightning under that load. Would it be better still if the driver slept a bit, and would the scanner/scale still respond quickly enough? Could the driver be restricted to one core on a multi-core machine, or is that happening already and I'm just not reading or using top correctly to see it?
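On the one-core question: taskset (from util-linux) can pin a process to a specific core. The commands below are illustrative, not the actual lane startup script, and "pos.exe" / core 0 are assumptions:

```shell
# Pin the scale/scanner driver to CPU core 0 at launch (core number illustrative)
taskset -c 0 mono pos.exe

# Or re-pin an already-running driver by PID
taskset -cp 0 $(pgrep -f pos.exe)
```

Note that a busy-wait loop pinned to one core will still show as 100% of that core; pressing `1` in top toggles the per-core view, which makes it easier to see whether the other cores are actually idle.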