> I wonder if it is related to jumpy TSC in Windows XP ...
Frogger knows nothing about Power Management Timer. These are separate problems.
Something is wrong with the implementation of `sync=slowdown`. If I add a hack that calls `msleep` a little less often, the time jumps almost disappear:
```diff
diff --git a/bochs/iodev/slowdown_timer.cc b/bochs/iodev/slowdown_timer.cc
index 2d715f50c..cf4c456dd 100644
--- a/bochs/iodev/slowdown_timer.cc
+++ b/bochs/iodev/slowdown_timer.cc
@@ -144,7 +144,8 @@ void bx_slowdown_timer_c::handle_timer()
 #if BX_HAVE_USLEEP
   usleep(s.Q);
 #elif BX_HAVE_MSLEEP
-  msleep(usectomsec((Bit32u)s.Q));
+  if (rand() % 100 > 20)
+    msleep(usectomsec((Bit32u)s.Q));
 #elif BX_HAVE_SLEEP
   sleep(usectosec(s.Q));
 #else
```
This is not a proper solution, of course. More investigation is needed.

I have no idea why, but my next hack lets me get smooth animations in games:
```diff
diff --git a/bochs/iodev/slowdown_timer.cc b/bochs/iodev/slowdown_timer.cc
index 2d715f50c..3535804b9 100644
--- a/bochs/iodev/slowdown_timer.cc
+++ b/bochs/iodev/slowdown_timer.cc
@@ -26,6 +26,8 @@
 #include "pc_system.h"
 #include "slowdown_timer.h"
 
+#include <chrono>
+
 #include <errno.h>
 #if !defined(_MSC_VER)
 #include <unistd.h>
@@ -59,6 +61,12 @@ bx_slowdown_timer_c::bx_slowdown_timer_c()
   s.timer_handle=BX_NULL_TIMER_HANDLE;
 }
 
+uint64_t get_monotonic_microseconds()
+{
+  return std::chrono::duration_cast<std::chrono::microseconds>(
+    std::chrono::steady_clock::now().time_since_epoch()).count();
+}
+
 void bx_slowdown_timer_c::init(void)
 {
   // Return early if slowdown timer not selected
@@ -73,7 +81,7 @@ void bx_slowdown_timer_c::init(void)
   if(s.MAXmultiplier<1)
     s.MAXmultiplier=1;
 
-  s.start_time = sectousec(time(NULL));
+  s.start_time = get_monotonic_microseconds();
   s.start_emulated_time = bx_pc_system.time_usec();
   s.lasttime=0;
   if (s.timer_handle == BX_NULL_TIMER_HANDLE) {
@@ -103,7 +111,7 @@ void bx_slowdown_timer_c::handle_timer()
 {
   Bit64u total_emu_time = (bx_pc_system.time_usec()) - s.start_emulated_time;
   Bit64u wanttime = s.lasttime+s.Q;
-  Bit64u totaltime = sectousec(time(NULL)) - s.start_time;
+  Bit64u totaltime = get_monotonic_microseconds() - s.start_time;
   Bit64u thistime=(wanttime>totaltime)?wanttime:totaltime;
 
 #if BX_SLOWDOWN_PRINTF_FEEDBACK
```
I just arbitrarily replaced the less precise `time()` with a more precise `<chrono>` equivalent. I probably broke something in the process (especially compatibility), but I hope it gives ideas about how to make a proper fix.

I tested this hack not only with Frogger at 1M IPS, but also with Superfrog at 5M IPS. It is playable now.
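To see why the clock precision matters here: with `time(NULL)`, `totaltime` is quantized to whole seconds, so `thistime = max(wanttime, totaltime)` tracks `wanttime` for most of a second and then leaps forward when the coarse clock ticks over. The following standalone sketch (not Bochs code; `Q` and the loop are made up, and the `sleep_for` stands in for the handler's `msleep`) reproduces the effect on the host:

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <thread>

int main()
{
    using namespace std::chrono;
    const uint64_t Q = 5000;  // hypothetical quantum, in microseconds
    auto start = steady_clock::now();
    uint64_t lasttime = 0;
    for (int i = 0; i < 600; i++) {
        uint64_t precise = (uint64_t)duration_cast<microseconds>(
            steady_clock::now() - start).count();
        uint64_t coarse = (precise / 1000000) * 1000000;  // time(NULL)-like granularity
        uint64_t wanttime = lasttime + Q;
        // Present algorithm: the coarse clock lags wanttime for most of a
        // second, then overtakes it in a single step.
        uint64_t thistime = (wanttime > coarse) ? wanttime : coarse;
        if (thistime > wanttime)  // the coarse clock just ticked over
            printf("jump of %llu us at t=%.2f s\n",
                   (unsigned long long)(thistime - wanttime), precise / 1e6);
        lasttime = thistime;
        std::this_thread::sleep_for(microseconds(Q));  // stands in for msleep(Q)
    }
    return 0;
}
```

The size of the printed jumps grows with the host's sleep overhead, so hosts with coarse sleep granularity should show it more clearly.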
I decided to visualize this problem. I made a modification which collects 20 seconds of data consisting of {totaltime_precise, total_emu_time} pairs. Then I performed the collection for two runs of Frogger, with `#if 0` (the present algorithm, with irregularity) and `#if 1` (the algorithm with my modification), and plotted two lines. The X axis corresponds to the totaltime_precise value, the Y axis to the total_emu_time - totaltime_precise value. The chart shows that the present algorithm produces time jumps with an amplitude of about 120 ms (yellow line). A small helper for turning the dump into plottable columns is sketched after the diff.
```diff
diff --git a/bochs/iodev/slowdown_timer.cc b/bochs/iodev/slowdown_timer.cc
index 2d715f50c..ad311db53 100644
--- a/bochs/iodev/slowdown_timer.cc
+++ b/bochs/iodev/slowdown_timer.cc
@@ -26,6 +26,10 @@
 #include "pc_system.h"
 #include "slowdown_timer.h"
 
+#include <chrono>
+#include <fstream>
+#include <map>
+
 #include <errno.h>
 #if !defined(_MSC_VER)
 #include <unistd.h>
@@ -50,15 +54,26 @@
 
 bx_slowdown_timer_c bx_slowdown_timer;
 
+std::map<uint64_t, uint64_t> points;
+std::ofstream* points_file;
+
 bx_slowdown_timer_c::bx_slowdown_timer_c()
 {
   put("slowdown_timer", "STIMER");
 
+  points_file = new std::ofstream();
+
   s.start_time=0;
   s.start_emulated_time=0;
   s.timer_handle=BX_NULL_TIMER_HANDLE;
 }
 
+uint64_t get_monotonic_microseconds()
+{
+  return std::chrono::duration_cast<std::chrono::microseconds>(
+    std::chrono::steady_clock::now().time_since_epoch()).count();
+}
+
 void bx_slowdown_timer_c::init(void)
 {
   // Return early if slowdown timer not selected
@@ -74,6 +89,7 @@ void bx_slowdown_timer_c::init(void)
   s.MAXmultiplier=1;
 
   s.start_time = sectousec(time(NULL));
+  s.start_time_precise = get_monotonic_microseconds();
   s.start_emulated_time = bx_pc_system.time_usec();
   s.lasttime=0;
   if (s.timer_handle == BX_NULL_TIMER_HANDLE) {
@@ -103,8 +119,22 @@ void bx_slowdown_timer_c::handle_timer()
 {
   Bit64u total_emu_time = (bx_pc_system.time_usec()) - s.start_emulated_time;
   Bit64u wanttime = s.lasttime+s.Q;
+  Bit64u totaltime_precise = get_monotonic_microseconds() - s.start_time_precise;
+#if 0
+  Bit64u totaltime = totaltime_precise;
+#else
   Bit64u totaltime = sectousec(time(NULL)) - s.start_time;
+#endif
   Bit64u thistime=(wanttime>totaltime)?wanttime:totaltime;
+  points.emplace(totaltime_precise, total_emu_time);
+  if (totaltime_precise > 20000000 && points_file)
+  {
+    points_file->open("points.txt");
+    for (auto const& p : points)
+      *points_file << "{" << p.first << "," << p.second << "}, ";
+    points_file->close();
+    points_file = nullptr;
+  }
 
 #if BX_SLOWDOWN_PRINTF_FEEDBACK
   printf("Entering slowdown timer handler\n");
diff --git a/bochs/iodev/slowdown_timer.h b/bochs/iodev/slowdown_timer.h
index aae1ab30d..e05ee3746 100644
--- a/bochs/iodev/slowdown_timer.h
+++ b/bochs/iodev/slowdown_timer.h
@@ -26,6 +26,7 @@ class bx_slowdown_timer_c : public logfunctions {
 private:
 
   struct {
     Bit64u start_time;
+    Bit64u start_time_precise;
     Bit64u start_emulated_time;
     Bit64u lasttime;
```
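The dump is a single line of "{x,y}, " pairs, so it needs a small conversion step before plotting. A minimal sketch of such a converter (a hypothetical helper, not part of the patch; it emits the X and Y columns used for the chart):

```cpp
#include <cstdio>

// Reads points.txt as written by the instrumentation above and prints
// "totaltime_precise  (total_emu_time - totaltime_precise)" per line,
// i.e. the X and Y values of the chart.
int main()
{
    FILE* in = fopen("points.txt", "r");
    if (!in) { perror("points.txt"); return 1; }
    unsigned long long x, y;
    while (fscanf(in, " {%llu,%llu},", &x, &y) == 2)
        printf("%llu %lld\n", x, (long long)(y - x));
    fclose(in);
    return 0;
}
```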
There are many ways to trigger the irregularity with the present algorithm. The charts show that it manifests itself even at early stages of boot. But one reproduction method may be more useful than others: the irregularity is more evident when a program inside the guest OS changes a large number of screen pixels.

So, to have better control over what is happening, I made a synthetic example - a program which produces an animation resembling what is going on in Frogger. I tested it in DOSBox and on real hardware; in both cases the animation was smooth. Here are the test files: Bar.zip. Code:
```cpp
#include <dos.h>
#include <math.h>
#include <stdlib.h>
#include <conio.h>
#include <iostream.h>
#include <graphics.h>

int main()
{
    int gdriver = VGA;
    int gmode = VGAHI;
    initgraph(&gdriver, &gmode, "");

    /* Reprogram PIT channel 0 (rate generator, lobyte/hibyte) with a
       known reload value so the counter can be used as a time source. */
    unsigned int reload = 0xe426;
    outportb(0x43, 0x34);
    outportb(0x40, reload & 0xFF);
    outportb(0x40, reload >> 8);

    int width = 16;
    int y1 = 240 - width / 2;
    int y2 = 240 + width / 2;
    int start = 0;
    int end = 0;
    double countF = 0.0;          /* elapsed PIT ticks */
    unsigned int prevCount = 0;
    while (!kbhit())
    {
        /* Latch and read the current counter value. */
        disable();
        outportb(0x43, 0);
        unsigned int count = inportb(0x40);
        count |= inportb(0x40) << 8;
        enable();

        /* Accumulate elapsed ticks; the counter counts down and
           wraps back to the reload value. */
        countF += prevCount;
        countF -= count;
        if (count > prevCount)
            countF += reload;

        /* Sweep position: one full 640-pixel pass every 16/3 seconds
           (1193182 Hz is the PIT input clock). */
        double t = countF / 1193182.0 * 3.0 / 16.0;
        double tf = t - (int)t;
        end = tf * 640;
        if (start != end)
        {
            /* Random pixels stress the emulator's screen update path. */
            for (int i = 0; i < 100; i++)
                putpixel(rand() % 640, rand() % 480, rand() % 2 * 15);
            if (end > start)
                bar(start, y1, end - 1, y2);
            else
            {
                /* Wrapped around: finish the old pass, flip the fill
                   color, and begin the new pass. */
                bar(start, y1, 639, y2);
                setfillstyle(SOLID_FILL, (1 - (int)t % 2) * 15);
                bar(0, y1, end - 1, y2);
            }
        }
        start = end;
        prevCount = count;
    }
    closegraph();

    /* Restore the default PIT reload value (0 = 65536). */
    outportb(0x43, 0x34);
    outportb(0x40, 0x00);
    outportb(0x40, 0x00);
    return 0;
}
```
I worried about compatibility because of `<chrono>`, but after more code reading I found that Bochs already has similar functionality in `bx_get_realtime64_usec`. It does not really have microsecond resolution (on Windows it is more like milliseconds), but it looks like that is enough for slowdown_timer.
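If its interface is the drop-in one the name suggests (an assumption here: no arguments, returning microseconds as a Bit64u; this should be verified against osdep.h), the hack above would reduce to swapping the helper out, along these lines:

```diff
 // in bx_slowdown_timer_c::init()
-  s.start_time = get_monotonic_microseconds();
+  s.start_time = bx_get_realtime64_usec();

 // in bx_slowdown_timer_c::handle_timer()
-  Bit64u totaltime = get_monotonic_microseconds() - s.start_time;
+  Bit64u totaltime = bx_get_realtime64_usec() - s.start_time;
```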
I don't entirely understand how slowdown_timer works, but if increasing the timestamp precision gets rid of the time irregularity and does not produce any negative effects, then it is fine to make such a change.

@vruppert, please check whether #377 gives smoother animation with your configurations (or at least whether it makes things worse).
I'm not really familiar with the Bochs timer code. For the slowdown timer I know that it should do periodic sleep calls in case the guest timer is ahead of realtime. For the guest side the timer runs at constant speed, but on the host side it can cause issues in games and with audio / video playback. I remember that on SF there is still an old feature request called "Adjustable sleep rate for emulator slowdown". Since Linux doesn't seem to be affected, please let me know which games / animations or similar code I should try on Windows with and without the changes from #377.
> Since Linux doesn't seem to be affected
This is strange. Probably you just have a fast processor and screen updates take very little time. I guess it is possible to force-produce glitches by adding artificial delays (1 - 4 ms) to the screen update code, but I did not try to do this.
> please let me know which games / animations or similar code I should try on Windows with and without the changes from https://github.com/bochs-emu/Bochs/pull/377.
First of all, check the two archives from this topic. Note that the most important settings are `update_freq=0`, `realtime=0` and `ips=...`; a sketch of where they go is shown below.
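A hypothetical bochsrc fragment with those settings (all other options omitted; the exact remaining lines depend on the setup):

```
# relevant lines only - the rest of the config is setup-specific
clock: sync=slowdown
cpu: ips=1000000
vga: update_freq=0, realtime=0
```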
Next you can test the Superfrog game, which I posted earlier (it has glitchy audio, but that is unrelated), and Supaplex.zip with 20M IPS.
Also, playing BigBuckBunny_320x180_crop_cinepak_adpcm.avi from #369 with the default player of Windows 95 and a Cirrus card at 70M IPS triggered this problem.
> For the slowdown timer I know that it should do periodic sleep calls in case the guest timer is ahead of realtime.
With the present code the delays are distributed incorrectly: like "delay, delay, delay, delay, delay, delay, no delay, no delay, no delay" instead of "delay, delay, no delay, delay, delay, no delay, delay, delay, no delay" (which #377 should produce).
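The even pattern is essentially what error diffusion gives you once the clock is precise enough to notice each tick's deficit. A standalone illustration (not Bochs code; the tick and skip counts are made up to match the example above):

```cpp
#include <cstdio>

// Spread "no delay" ticks evenly over a correction window using an error
// accumulator (Bresenham-style), instead of bunching them at the end.
int main()
{
    const int ticks = 9;  // handler invocations in the window (assumed)
    const int skips = 3;  // sleeps that must be dropped to catch up (assumed)
    int err = 0;
    for (int i = 0; i < ticks; i++) {
        err += skips;
        if (err >= ticks) {   // accumulated deficit reaches one full tick
            err -= ticks;
            printf("no delay\n");
        } else {
            printf("delay\n");
        }
    }
    return 0;
}
```

This prints exactly the "delay, delay, no delay" pattern repeated three times.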
Additional information: I have lots of programs running on my host OS (Windows 7) which change the system timer interval. That's why I previously made my tests with a 1 ms interval (TimerTool).
To cover more possible cases, I decided to test how Bochs behaves with the default interval. To make that happen, I closed all open programs until the interval changed to 15.6 ms. After Bochs was launched, the interval changed to 10 ms (I don't know which API call caused that).
With such a value, the animation (tested with Bar.zip) was even worse than before. #377 produced an acceptable result despite the different interval value.
It probably makes sense to set it intentionally to 1 ms (`timeBeginPeriod(1)`) to match what slowdown_timer expects, but that's a topic for another time, I guess.
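For reference, a minimal sketch of what that could look like on the Windows side (timeBeginPeriod/timeEndPeriod are documented winmm calls; where exactly Bochs would put this is an open question):

```cpp
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

// Hold a 1 ms system timer resolution for the lifetime of this object,
// so that msleep()-style waits wake up close to the requested time.
struct ScopedTimerResolution {
    ScopedTimerResolution()  { timeBeginPeriod(1); }
    ~ScopedTimerResolution() { timeEndPeriod(1); }
};

// Usage: keep one instance alive while the emulation loop runs, e.g.
//   ScopedTimerResolution res;  // resolution restored on destruction
```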
One method of providing a uniform amount of resources to a program is the `sync=slowdown` option. However, in reality, even with `ips=1000000`, resources are delivered to the program at a changing (oscillating) speed. This makes it hard to use programs (games) which require correctly timed input from the user. For example, try to track the movement of a single car in Frogger (frogger.zip):

https://github.com/bochs-emu/Bochs/assets/1242858/0f3ffeff-1dc9-43f5-a4ad-fb2305a2cdeb

It moves at approximately equal speed, then jumps forward, then moves evenly again, and so on. Even the in-game music suffers from this effect. The same game in DOSBox produces smooth movement.

Version: d9d07c03ec115a01a15fc705ab5f96ea951716e4.