Closed by mj-xmr 4 months ago
I forgot to add the uninterrupted maintenance of Monero health report to the Candidate's Background section.
# Intro
I have officially signed the contract with MAGIC to start working on 12.09.2022, with the first of my three deadlines for presenting progress set to 15.10.2022. So far my work can be described as having two major parts:
Due to changes in the package lists of the Mac & Windows virtual machines on GitHub, some CI adaptations had to be made. I had made them earlier, since I knew ahead of time that I'd start working with MAGIC, but they were very much related to this work package.
More notably, the below changes were made already after signing the contract:
One of my tasks contracted by MAGIC is to test whether MyMonero fees were/are fingerprintable, or in other words, whether it's possible to trace the fee estimation data in the blockchain down to a particular alternative implementation, in this case MyMonero. While reading about how to set up an experiment using resources like this one (thanks, @j-berman ), I came to the following conclusions:
I have to warn already that before I can achieve any concrete output, the framework will have to be created, at least in a very minimalistic form. Only then can I "release" the tasks and results, so that other Researchers are able to carry on with their parts of the same tasks.
In the effort of creating the necessary infrastructure, so far I have extracted this unmerged PR into a patch in my Monero Patches project. The PR, and thus the patch, make it possible to set up an error-free Testnet (as opposed to Monero's `master` branch). This way everybody may try out our results in a reproducible manner. Yet the main advantage is that it provides us, the Devs and Researchers, with more confidence in the validity of our research. That confidence will serve everybody best if it can be propagated to other areas of similar research tasks, hence the need to create a reusable framework of this kind.
Expecting that it's not the last such patch to be created, I created an automated test in "the patches project" that checks which patches are mergeable with Monero's `master` branch at a given time. The test is scheduled weekly. The hope is that at least the MRL-relevant patches remain mergeable. If not, then we'll learn it quickly from the test and I'll be able to react quickly.
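A weekly mergeability check of this kind can be sketched in a few lines of Python. The `git apply --check` call and the one-line-per-patch report format are my illustration of the idea, not necessarily how the actual test in the patches project is implemented:

```python
# Sketch of a patch-mergeability test: for each *.patch file, ask git whether
# it would still apply cleanly to the current checkout, without modifying it.
import subprocess
from pathlib import Path

def patch_is_mergeable(repo_dir: str, patch_file: str) -> bool:
    """True if `git apply --check` succeeds, i.e. the patch still applies."""
    result = subprocess.run(["git", "apply", "--check", patch_file],
                            cwd=repo_dir, capture_output=True)
    return result.returncode == 0

def collect_status(repo_dir: str, patches_dir: str) -> dict:
    """Map each patch file name to its current mergeability."""
    return {p.name: patch_is_mergeable(repo_dir, str(p))
            for p in sorted(Path(patches_dir).glob("*.patch"))}

def format_report(status: dict) -> str:
    """One compact line per patch, so the whole report fits a single screen."""
    return "\n".join(f"{'OK  ' if ok else 'FAIL'} {name}"
                     for name, ok in sorted(status.items()))
```

Scheduling this weekly (e.g. via a CI cron trigger) and mailing the output of `format_report` would give exactly the kind of early warning described above.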
In order to ease the act of manual verification, I also created a very rough report of the test, visible below, all in a single screen, hence without having to scroll:
I might flag the MRL patches as crucial and thus receive notifications whenever the patches deteriorate.
The need for such a framework comes from the fact that it will be easier to isolate a given input and perform reproducible and non-destructive experiments on a private Testnet. In the future, even congestion stress-tests could be performed in a similar manner.
In this example the testing framework would involve an automated and repeatable act of:
Conceptually this shares the idea behind unit tests - isolation of a given degree of freedom. It also makes it possible to use a private Testnet to "break things" while experimenting, without facing negative consequences - like spamming the public networks or triggering alerts.
By contrast, you could try transmitting transactions on the Mainnet, which obviously modifies it:
And the "Testing Framework" will be not much more than a Python script that sets up the Testnet and executes a user's Python module on it in a controlled manner. Although it might take a week or two to prepare, the benefits can then be reaped continuously for each similar task.
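As a rough illustration of that idea, here is a minimal sketch of such a runner. All names (`Testnet`, `run_experiment`, the ports) are hypothetical placeholders rather than the actual planned interface:

```python
# Sketch of the planned "Testing Framework": a Python script that sets up a
# private Testnet and runs a user-supplied experiment module against it.
# Everything here is illustrative; the real monerod startup is stubbed out.
import importlib

class Testnet:
    """Manages a small private testnet of nodes for one experiment."""
    def __init__(self, node_count: int = 2, base_port: int = 28080):
        self.node_count = node_count
        self.base_port = base_port
        self.procs = []  # would hold subprocess handles of monerod instances

    def __enter__(self):
        # A real implementation would launch patched monerod nodes here, e.g.
        # subprocess.Popen(["monerod", "--testnet", "--p2p-bind-port", ...]).
        return self

    def __exit__(self, *exc):
        for p in self.procs:  # tear the testnet down, success or failure
            p.terminate()

def run_user_module(module_name: str, testnet: Testnet):
    """Import the user's experiment module and hand it the running testnet.

    Assumed convention: the module exposes run_experiment(testnet)."""
    mod = importlib.import_module(module_name)
    return mod.run_experiment(testnet)
```

The context manager guarantees the private network is torn down even if the user's experiment raises, which is what makes the experiments non-destructive and repeatable.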
Broadly speaking, the framework, which really is based on the Testnet, should help in answering, on a tiny sample, the question "Would this kind of query be meaningful to execute on the MainNet?" before spending hours or days waiting for the preliminary result on the MainNet. (ping @ack-j )
Architecturally the framework will consist of 4 major parts:
The verification of MyMonero remains the priority. Building the framework will decrease the focus on MyMonero, but no more than as a consequence of thinking forward, namely: choosing Python rather than an ad-hoc Bash script from the start. Python requires some up-front investment, but in the long run allows doing things that Bash can't:
As one may have already concluded, since the Qt Data Viewer is basically up-to-date, and the need to perform the diagnosis on MyMonero (as the first example) is more important than preparing to create any Time Series Analysis models, for now the work on `tsqsim` remains halted. Instead, I'll be focusing on the MyMonero test and the generic testing framework, based on the Testnet. I hope to deliver the first meaningful findings by the first deadline of 15.10.2022.
Hello everybody. It's time for my first report.
A side note: except for this very paragraph in italics, this report, in exactly the same form, has been sent to the MAGIC Panel for approval, in order to make sure that no sensitive/controversial information is shared, etc. I'm posting this now, after receiving the approval.
First of all, thank you for accepting my proposal, and apologies for the vague description for the public. Please take into account that I'm both paranoid and inexperienced when it comes to sharing information about flaws in other people's software.
Over time it will become more obvious what I do, and whether a given piece of information is already "priced into the market", or better: already addressed, like the (allegedly) outdated MyMonero fee calculation issue that I'm about to present. MyMonero fee fingerprinting will, however, be the last major section of this report, due to its length and specifics. The other parts, OTOH, can be characterized as an interesting variety and will thus be presented first, so that we can better focus on the fee fingerprinting later on. A more in-depth description of the task and my current findings is given there.
Another issue that I must address is the massive delay that my proposal has suffered lately. I hope that in this first turbulent phase we can settle for a delayed payment (after passing a review, of course). To keep this part of the story short: the positive side is that the time I couldn't spend here was spent on closing my business, which had lately been nothing more than a huge waste of time. I'm very glad to leave it behind.
The tasks that I finalized in this period fall into the below categories:
# tsqsim updates, as planned

Before even submitting the proposal to MAGIC, but knowing that it would happen soon, I had contributed relevant code to a general-purpose library that I use across my projects: EnjoLib.
My relevant contribution to the project is dependency-free ASCII plots, which can even be animated. In their rawest form, this is how they are able to represent the data in a quick and compact way:
I use this plotting functionality in both `tsqsim` and `SolOptXMR`. Although `SolOptXMR` is out of scope for this proposal, I have to present a demo from the project, to show how the raw plots can be turned into something more flashy:
(More SolOptXMR examples here)
You'll see later on how I started to use them in my research, but I believe that you can already imagine the possibilities, and how such a simple and compact thing can keep you from becoming distracted by having to look at another window to see any plots (for example, ones generated by a supplementing Python script).
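Since EnjoLib's plots are implemented in C++, here is only a rough Python analogue of the underlying idea - rendering a series as a dependency-free ASCII plot - and not the library's actual algorithm:

```python
# Sketch of dependency-free ASCII plotting: scale each value of a series to a
# row of a fixed-height character grid and print the grid top-down.
def ascii_plot(values, height: int = 8, mark: str = "*") -> str:
    """Render a list of numbers as a crude ASCII plot, one column per value."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0          # avoid division by zero for flat series
    rows = [round((v - lo) / span * (height - 1)) for v in values]
    lines = []
    for level in range(height - 1, -1, -1):   # top row = maximum value
        lines.append("".join(mark if r == level else " " for r in rows))
    return "\n".join(lines)
```

Printing the returned string inside the same terminal session is exactly what keeps the data and the plot in one window.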
The project transactions-export allows exporting Monero transactions from a blockchain data file into human-readable CSV files.
I chose this as an alternative to @neptuneresearch 's tools, as described in the fingerprinting-a-flood-data project. Unfortunately, I wasn't able to meet all the requirements to reproduce the data acquisition as described, due to hardware and software restrictions that never held at the same time (at a given place I have either one (x)or the other). In order not to spend too much time on a branch that started to look too distant under these circumstances, I pivoted to @moneroexamples 's tools and reaped the profits very soon.
So far I have submitted a patch that adds the fee values that we're after to the values already being exported for each transaction. I also submitted this tiny one for the record.
I'm considering submitting another one that would enable exporting the CSV file directly into a `gzip`ed archive, in order to save more than 70% of the original file's size. This idea is associated with my wish to make it easier for everybody to gain access to human-readable exported data, which counts as the "infrastructure" category. For this reason, I'll address it in another paragraph.
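For illustration, streaming the rows directly into a gzip'ed CSV is straightforward with Python's standard library; the column names below are made up, not the actual transactions-export schema:

```python
# Sketch: write exported transaction rows straight into a .csv.gz file,
# avoiding a large intermediate plain-text file on disk.
import csv
import gzip

def export_csv_gz(path, rows, header=("height", "timestamp", "fee")):
    """Stream rows into a gzip'ed CSV without buffering them all in memory."""
    with gzip.open(path, "wt", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        for row in rows:          # rows may be a generator of tuples
            writer.writerow(row)
```

Because `rows` can be a generator, this combines naturally with the line-by-line processing mentioned further below.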
I started updating tsqsim from its important dependency: the URT library, which delivers important functionality related to Time Series Analysis: objective indicators that describe the level of stationarity of a given series, without which it makes little sense to build any time series prediction models. Before the update, the library would consume so much RAM during compilation that the even more RAM-consuming debug mode was crashing my IDE. Now it takes 50% of the original RAM requirements. As a side effect, the library's compilation time was also reduced by 50%. To figure out the right weak spots, I used the same skill set that I had used for reducing the compilation time of Monero-Core.
Next, I decided to update the Qt Data Viewer app, as its outdated state made it impossible to use under recent versions of Linux distributions, and thus under OSX. I initially thought that I'd be forced to update the backbone library, the amazing QCustomPlot, from version 1 to 2, which would be a great hassle, so I prepared the required dependency first. Later on, once I had focused for a whole day on the causes of the failing compilation, I realized that the solution was easier than I had initially expected, thus saving a lot of time budget on this pre-planned task. The end result is that the app now works under all versions of Linux and OSX.
There was a hanging `benchmark` branch, as requested during a Community Meeting in February. The branch introduced not only the benchmarking functionality, which proved an almost 2 orders of magnitude speed superiority of `tsqsim`, but also the possibility to use Python scripts within the framework, aside from the already available R scripts. The inclusion of Python brings a wide variety of additional tools. The problem was that, through my effort to make it all a fair comparison between the frameworks, I had degraded the interface by a lot. This PR brings the interface back to its usability.
The last notable change was adding weekly discrete time steps to the already available time steps, as per Rucknium's request. The rationale is that weekly time steps encompass (or: neutralize) humans' various daily habits across a week, and are thus the commonly used time steps in econometrics. Even though the change was small, it took me some time to figure out why it didn't initially work, because it touched a portion of my code that I had considered stable (too) long ago. I took the liberty of refactoring this old code to bring it into a more state-of-the-art shape.
The two next changes are more of a preparation for the further steps of fingerprinting research.
Because the exported blockchain data are quite large and don't fit into memory, special care has to be taken to process them line by line and to dispose of them in the same way. This update handles this. The idea came up while I was trying out various ways of handling the data, but for the time being I settled on using Python scripts, which I'll mention later.
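A minimal sketch of such line-by-line processing in Python, assuming a hypothetical `fee` column in the exported CSV:

```python
# Sketch: process a huge exported CSV line by line, so memory use stays
# roughly constant regardless of the file size.
import csv

def iter_fees(csv_path: str):
    """Yield the fee of each transaction without loading the file into RAM."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            yield int(row["fee"])

def fee_histogram(csv_path: str) -> dict:
    """Count how often each distinct fee value occurs in the export."""
    counts = {}
    for fee in iter_fees(csv_path):
        counts[fee] = counts.get(fee, 0) + 1
    return counts
```

Generators like `iter_fees` are exactly what Python offers here that an ad-hoc Bash pipeline handles much less gracefully.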
The last change brings about the ability to export the blockchain using automated scripts, rather than downloading them from my server in a pre-processed text form. I think this is an essential part, needed for completeness and transparency. It's still a WIP though, as the fee fingerprinting remains the main focus for now.
As already mentioned, I want to make the research and its verification easier for a curious Amateur, and even for a Researcher who would prefer to perform the actual research rather than having to combat obstacles. This means that the related tools need to become simpler as well. I wrapped up the tools recently made available for this purpose in this report / subproject. The subproject deals with all of my MRL-related tasks.
However, as promised, I will re-post here one example of using the ASCII plotting that I contributed (see the bottom of both screenshots):
So far, all that I had done directly for MRL was my decoy analysis, together with @Rucknium. The current Fee Fingerprinting task is a jump into cold water for me, as I receive fewer orders and pointers compared to the previous project. I have to become adequately more creative, which makes it a harder, albeit more open-ended, task. "Fingerprinting a flood", by Mitchell P. Krawiec-Thayer, PhD (aka Isthmus), has been a great inspiration so far, yet even more is required from me. This means filling the gaps that became apparent only in this specific task: we don't immediately observe any "floods" here anymore, but rather have a suspicion, whose effects initially go unnoticed when observing the blockchain's patterns.
The task here is to first find more clues and search criteria, and only after they're defined can we start actively searching for them in the blockchain, like a needle in a haystack. The definition of the search criteria includes defining the chronological relations between the projects involved, in order to learn when exactly to expect a given pattern to occur (e.g. a delayed update of a fork) or stop occurring (e.g. the final update of the fork and the tagging of its public version), potentially gradually, as more and more users update their client software.
I admittedly started off with this task by "circling around it" first, namely: not addressing the issue itself just yet, but rather trying to find as many ways of observing the allegation as possible, ideally also adding new ways of discerning it, compared to what has already been tried, for example by Isthmus.
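To make the "circling around" more concrete, here is one possible, very simplified observation method: counting how many observed transaction fees match the fee values expected from each candidate implementation. The candidate fee sets in the test usage are purely hypothetical placeholders, not real fee data:

```python
# Sketch: classify observed transaction fees by which implementation's
# expected fee set they match. This is an illustration of the search idea,
# not a validated fingerprinting method.
def classify_fees(observed_fees, candidates: dict) -> dict:
    """candidates maps an implementation name to its set of expected fees."""
    counts = {name: 0 for name in candidates}
    counts["unmatched"] = 0
    for fee in observed_fees:
        for name, expected in candidates.items():
            if fee in expected:
                counts[name] += 1
                break
        else:
            counts["unmatched"] += 1
    return counts
```

Fed with the `fee` column of the exported CSV data and with per-block expected fees derived from each implementation's estimation code, such counts could hint at whether a distinct fee population exists at all.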
As I've already mentioned, my progress can be monitored in my MRL subproject, which contains only publicly available information. New information shall be published there in a shortened form, once it passes the respective reviews.
Except for the plots already visible in the above subproject, the finalized highlight is the chronology of the updates of the related software, merged into a single timeline using `git log` and text-manipulation tools like `awk` & friends. The scripts that produce this output will be shared a bit later, after I get them into a user-friendly state. The extracted dates were later placed in the plots as the colorful vertical lines, which I find very helpful:
Now here's the chronology itself. There I present just the final, yet broad, conclusion: if the differently calculated fees do indeed leave fingerprints in the blockchain, then it can clearly be seen how long it can take for a given fork to have its internals fully updated. In the case of MyMonero, the final update occurred as recently as 2022-04-07, when the public interface of the MyMonero project, MyMonero app JS, was tagged as `v1.2.6`, thus including @j-berman's pro-active fix #35, whereas the first public update of Monero Core which introduced CLSAG dates back to 2020-09-15 and was tagged as `v0.17.0.0` (Oxygen Orion), making it a delay of about a year and a half.
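Such a timeline can be assembled by querying each repository for its tag dates; the sketch below is my illustration of the approach, not the actual `git log`/`awk` scripts mentioned above:

```python
# Sketch: extract tag dates from a git repository and compute the propagation
# delay between an upstream release and a fork's catch-up release.
import subprocess
from datetime import date

def tag_dates(repo_dir: str) -> dict:
    """Map each tag in the repository to the date it points at."""
    out = subprocess.run(
        ["git", "for-each-ref",
         "--format=%(refname:short) %(creatordate:short)", "refs/tags"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout
    result = {}
    for line in out.splitlines():
        tag, day = line.rsplit(" ", 1)
        result[tag] = date.fromisoformat(day)
    return result

def delay_days(upstream: date, fork: date) -> int:
    """Days between an upstream release and the fork catching up to it."""
    return (fork - upstream).days
```

For the two dates above, `delay_days(date(2020, 9, 15), date(2022, 4, 7))` yields 569 days, i.e. roughly the year and a half mentioned.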
Getting somewhat more into detail now, here is the full agglomerate of the relevant software, all updates of which are within a common timeline. Whenever no tags are/were produced by a given repository, the commits were extracted instead, as in the case of `mm-core-cpp-c`, which stands for MyMonero-core-cpp commits.
The release management of MyMonero, or rather the part of it that's relevant to us ATM, appears to be the following:
The `MyMonero App JS` is the public/user interface, which integrates updates from other subprojects with a delay. The delays naturally concern every subproject and are propagated. The most relevant such subproject is the aforementioned MyMonero Core CPP. This subproject has 2 main responsibilities:

1. Filtering out Monero's `wallet2` class. This makes sense from an architectural PoV, as MyMonero is a wallet of itself. Just for completeness' sake, MyMonero's equivalent of `wallet2` appears to reside server-side (if I'm not mistaken), therefore it's not to be seen in the distributed source code, meant for the End-User.
2. Integrating `Monero` (over the `monero-core-custom` fork, to be precise) for the ease of the integration with their own `wallet`.
of their own.Having this scheme in mind, it's easier to explain the problem: In order to filter out the irrelevant wallet2
class, the entire wallet2.cpp
source file is being filtered out in monero-core-custom. This normally should pose no problem at all, but in case of Monero, for a reason unknown to me, the wallet2.cpp
file contains a couple of crucial, globally-defined functions, like the calculate_fee(), and similar. They are globally-defined, meaning, that they reside within a globally accessible memory space, yet they aren't globally-declared - as they should be - in the wallet2.h
header file. This means, that they are effectively inaccessible from outside of the wallet2.cpp
file. Even if the wallet2.cpp
and wallet2.h
files were to be distributed in a project like MyMonero, although this would be an architectural mistake, these projects would still not be able to use the functions like calculate_fee()
, since enabling this would have to mean performing an inelegant intrusion into the wallet2.cpp
and wallet2.h
, in order to expose these functions.
In the case of MyMonero (and possibly other forks as well), the only seemingly elegant solution was to reimplement `calculate_fee()` and similar functions, as we can see here. Exactly this reimplementation is what led to the delay of ~1.5 years, propagated from `monero-core-custom` through `MyMonero Core CPP` and finally to `MyMonero app JS`.
It was easier for me to learn the reasons for these decisions and delays while I was working on my patch for testing the changes quickly, as the compilation (parsing, specifically) time of `wallet2.h` is unbearable. Here it can be seen what intrusions I had to make into `wallet2.h`, and later into `wallet2.cpp`, in order to be able to expose the relevant functions under test, like `calculate_fee()`. While I may allow myself to make such modifications for a cheeky patch, I doubt that I'd want to do the same for a project like MyMonero, meant to be distributed to End-Users, especially if I were initially unaware of what fingerprinting problems my `calculate_fee()` reimplementations could pose in the future.
My proposed solution would be to extract the global functions like `calculate_fee()` completely out of the `wallet2.cpp` source file into their own library, alongside basic libraries like `cryptonote_basic`, which could then be directly and almost instantly copied into the forks, without having to drag in all the unrelated `wallet2` class dependencies. The forks' publicly exposed interfaces, like MyMonero App JS, would then simply tag the update, effectively greatly limiting the cumulative delay and thus the potential for fingerprinting.
I'd be grateful for a review of this strategy and for a go-ahead as part of this MAGIC proposal, either in parallel to, or after, the actual act of detecting the fingerprints on the Monero Blockchain, which shall be my very next focus.
I'm sending the majority of the information via e-mail; it answers the questions asked here plus a few others that are specific to this request.
# Candidate Background
# Project Description
Generally speaking: fortifying Monero against statistical attacks and providing tooling for independent research and model stress-testing:
`tsqsim`.

# Technical Approach
Please see the e-mail. I expect to collect the funds via an individual fundraiser for my work alone and only hope to be checked, approved and displayed on the MAGIC website.
# Milestones and Budget
Please see the e-mail. I'd expect the total budget for my work to be $19,200.00 (nineteen thousand two hundred US Dollars) for a 3-month work period.