domokane / FinancePy

A Python Finance Library that focuses on the pricing and risk-management of Financial Derivatives, including fixed-income, equity, FX and credit derivatives.
GNU General Public License v3.0

test invocation (use pytest?) #74

Closed epogrebnyak closed 3 years ago

epogrebnyak commented 3 years ago

Sorry to drop in with random observations - there is a lot of manual work that you do to make tests run, including tweaking sys.path. If the project is installable as a package (yours is), the tests can be discovered and invoked with a single command, pytest. Writing your own test invoker and test progress display is not a good use of your time.

Will check if this can be simplified.

https://github.com/domokane/FinancePy/blob/6789f6a338793ba3c78aa74ef9dedca65fe2cd0e/tests/runAllTests.py#L1-L75
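
For illustration only, here is a minimal pytest-style test built from the values used in TestFinEquityVanillaOption.py. The file name and the pinned value are assumptions, and the FinancePy import lines are left out - reuse whatever the existing test files import:

# tests/test_equity_vanilla_option.py  (hypothetical file name)
import pytest

def test_european_call_value():
    valueDate = FinDate(2015, 1, 1)
    expiryDate = FinDate(2015, 7, 1)
    model = FinModelBlackScholes(0.30)
    discountCurve = FinDiscountCurveFlat(valueDate, 0.05)
    dividendCurve = FinDiscountCurveFlat(valueDate, 0.01)
    callOption = FinEquityVanillaOption(
        expiryDate, 100.0, FinOptionTypes.EUROPEAN_CALL)
    value = callOption.value(valueDate, 100.0, discountCurve,
                             dividendCurve, model)
    # 9.3 is a placeholder: pin whatever value the analytic model actually produces
    assert value == pytest.approx(9.3, abs=0.1)

Running plain pytest from the project root would then discover and run every test_*.py file, with no sys.path tweaking or custom runner, and report a single pass/fail summary.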

domokane commented 3 years ago

My testing approach is quite specific to the nature of the project. Each test file produces a lot of output given that it cycles over many of the inputs of the pricing model. Too much data is produced to reasonably put in an assert. My aim is to detect any changes quickly and with a way to see what they are.

epogrebnyak commented 3 years ago

Tests need a clear resolution - one must know whether they pass or fail.


epogrebnyak commented 3 years ago

Looking into https://github.com/domokane/FinancePy/blob/master/tests/FinTestCases.py... I think this tries to be a runner for golden-file tests, with options to generate or compare output. That is relevant for the data-intensive parts of the program, but not for all of the library, which you can cover with simpler unit tests (and pytest).

For golden-file tests there are also dedicated runners.
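
As a sketch of what such a runner boils down to (none of this is FinancePy's FinTestCases API; the directory names and helper are made up), a generate-or-compare golden check can be a single helper that pytest turns into a hard pass/fail:

from pathlib import Path

GOLDEN_DIR = Path("tests/golden")     # assumed location of reference output
COMPARE_DIR = Path("tests/compare")   # assumed location of freshly generated output

def check_golden(name, output_lines):
    """Write the new output, then fail if it differs from the stored golden copy."""
    COMPARE_DIR.mkdir(parents=True, exist_ok=True)
    GOLDEN_DIR.mkdir(parents=True, exist_ok=True)
    new_text = "\n".join(output_lines)
    (COMPARE_DIR / (name + ".txt")).write_text(new_text)
    golden_file = GOLDEN_DIR / (name + ".txt")
    if not golden_file.exists():
        golden_file.write_text(new_text)   # first run: promote the output to golden
        return
    assert new_text == golden_file.read_text(), \
        name + ": output differs from golden file " + str(golden_file)

Plugins such as pytest-regressions package exactly this generate/compare behaviour, including a flag to regenerate the reference files.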

epogrebnyak commented 3 years ago

Looking at the tests a bit more deeply - you seem to put cycles of work into one "test" and then compare the logger outputs in the golden/compare folders.

In this part of the test: https://github.com/domokane/FinancePy/blob/6789f6a338793ba3c78aa74ef9dedca65fe2cd0e/tests/TestFinEquityVanillaOption.py#L25-L85


# EP: This is not actually a single test, this is a group of tests
def test_FinEquityVanillaOption():

    # EP: this is the test setup/fixture
    valueDate = FinDate(2015, 1, 1)
    expiryDate = FinDate(2015, 7, 1)
    stockPrice = 100
    volatility = 0.30
    interestRate = 0.05
    dividendYield = 0.01
    model = FinModelBlackScholes(volatility)
    discountCurve = FinDiscountCurveFlat(valueDate, interestRate)
    dividendCurve = FinDiscountCurveFlat(valueDate, dividendYield)
    callOption = FinEquityVanillaOption(
            expiryDate, 100.0, FinOptionTypes.EUROPEAN_CALL)

    # EP: this is the first test
    numPathsList = [10000, 20000, 40000, 80000, 160000, 320000]
    # testCases.header is logger.log()
    testCases.header("NUMPATHS", "VALUE_BS", "VALUE_MC", "TIME")

    for numPaths in numPathsList:
        value = callOption.value(valueDate, stockPrice, discountCurve,
                                 dividendCurve, model)
        start = time.time()
        valueMC = callOption.valueMC(valueDate, stockPrice, discountCurve,
                                     dividendCurve, model, numPaths)
        end = time.time()
        duration = end - start
        # testCases.print is logger.log()
        testCases.print(numPaths, value, valueMC, duration)

    # EP: this is the second test; it should run in isolation
    stockPrices = range(80, 120, 10)
    numPaths = 100000
    testCases.header("NUMPATHS", "CALL_VALUE_BS", "CALL_VALUE_MC", 
                     "CALL_VALUE_MC_SOBOL", "TIME")
    useSobol = True

    for stockPrice in stockPrices:

        value = callOption.value(valueDate, stockPrice, discountCurve,
                                 dividendCurve, model)

        start = time.time()

        useSobol = False
        valueMC1 = callOption.valueMC(valueDate, stockPrice, discountCurve,
                                      dividendCurve, model, numPaths, useSobol)

        useSobol = True
        valueMC2 = callOption.valueMC(valueDate, stockPrice, discountCurve,
                                      dividendCurve, model, numPaths, useSobol)
        end = time.time()
        duration = end - start
        testCases.print(numPaths, value, valueMC1, valueMC2, duration)
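
To illustrate the point about isolation, the two parts could become separate, parametrized pytest tests that end in a hard assert. The fixture, tolerances and test names below are assumptions, and the FinancePy imports are again elided:

import pytest

@pytest.fixture
def option_setup():
    # shared setup, built once per test instead of once per file
    valueDate = FinDate(2015, 1, 1)
    model = FinModelBlackScholes(0.30)
    discountCurve = FinDiscountCurveFlat(valueDate, 0.05)
    dividendCurve = FinDiscountCurveFlat(valueDate, 0.01)
    callOption = FinEquityVanillaOption(
        FinDate(2015, 7, 1), 100.0, FinOptionTypes.EUROPEAN_CALL)
    return valueDate, model, discountCurve, dividendCurve, callOption

@pytest.mark.parametrize("numPaths", [10000, 40000, 160000])
def test_mc_converges_to_analytic(option_setup, numPaths):
    valueDate, model, discountCurve, dividendCurve, callOption = option_setup
    value = callOption.value(valueDate, 100.0, discountCurve, dividendCurve, model)
    valueMC = callOption.valueMC(valueDate, 100.0, discountCurve,
                                 dividendCurve, model, numPaths)
    # 0.1 is an assumed tolerance; tighten it once the MC error is characterised
    assert valueMC == pytest.approx(value, abs=0.1)

@pytest.mark.parametrize("stockPrice", range(80, 120, 10))
def test_sobol_matches_analytic(option_setup, stockPrice):
    valueDate, model, discountCurve, dividendCurve, callOption = option_setup
    value = callOption.value(valueDate, stockPrice, discountCurve, dividendCurve, model)
    valueMC = callOption.valueMC(valueDate, stockPrice, discountCurve,
                                 dividendCurve, model, 100000, True)
    assert valueMC == pytest.approx(value, abs=0.1)

Each parameter combination then shows up as its own pass/fail line, and a failure names the exact numPaths or stockPrice that broke.
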
domokane commented 3 years ago

I am fine with if statements for option payoffs. I don't want to create a class for every single payoff. That would cause a huge amount of duplication. Some products can have eight option types.

epogrebnyak commented 3 years ago
Looking in folder: tests
TEST:   1 out of 102: MODULE: TestFinAmount                       WARNINGS:   0   ERRORS:   0 
TEST:   2 out of 102: MODULE: TestFinBond                         WARNINGS:   0   ERRORS:   0 
TEST:   3 out of 102: MODULE: TestFinBondAnnuity                  WARNINGS:   0   ERRORS:   0 
TEST:   4 out of 102: MODULE: TestFinBondConvertible              WARNINGS:   7   ERRORS:   7 ******* 
epogrebnyak commented 3 years ago

I am fine with if statements for option payoffs. I don't want to create a class for every single payoff. That would cause a huge amount of duplication. Some products can have eight option types.

Ok, so at least that is not accidental and comes by design.
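
For readers following the design point, the trade-off is roughly the one below: one pricing routine branching on an option-type enum versus a class per payoff (the names here are illustrative, not FinancePy's actual classes):

from enum import Enum

class OptionType(Enum):      # illustrative stand-in for FinOptionTypes
    EUROPEAN_CALL = 1
    EUROPEAN_PUT = 2

def payoff(option_type, s, k):
    # one function, one if/elif chain: adding a payoff is one more branch,
    # not a new class duplicating dates, curves and valuation plumbing
    if option_type == OptionType.EUROPEAN_CALL:
        return max(s - k, 0.0)
    elif option_type == OptionType.EUROPEAN_PUT:
        return max(k - s, 0.0)
    raise ValueError("unsupported option type: " + str(option_type))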

domokane commented 3 years ago

A test that shows an error is not meaningless. When a test shows an error that means that the output numbers have changed from the golden file. I may know what the error is and that it is not serious but not have time to update the golden file. So the error is still reported until I do that. If I do not know the reason then I investigate. If it is an acceptable change then I reset the golden file so that the error no longer triggers. The reason you see an error is because I have not rerun the golden file even though this error is not material.
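
If the golden checks ever move under pytest, that "reset the golden file" step can be made an explicit command-line switch. The flag name below is an assumption, but pytest_addoption itself is standard pytest:

# conftest.py  (hypothetical sketch)
import pytest

def pytest_addoption(parser):
    parser.addoption("--update-golden", action="store_true", default=False,
                     help="overwrite golden files with current output instead of comparing")

@pytest.fixture
def update_golden(request):
    return request.config.getoption("--update-golden")

A golden check would then overwrite its reference file when run as pytest --update-golden, and compare (and fail) otherwise.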

domokane commented 3 years ago

Most of the issues that you point out are not accidental. For example, relative references were used to make it easier to move modules when the project was starting. Now that the project is more mature I am happy to hard-code them as absolute references.

epogrebnyak commented 3 years ago

A test that shows an error is not meaningless. When a test shows an error that means that the output numbers have changed from the golden file. I may know what the error is and that it is not serious but not have time to update the golden file. So the error is still reported until I do that. If I do not know the reason then I investigate. If it is an acceptable change then I reset the golden file so that the error no longer triggers. The reason you see an error is because I have not rerun the golden file even though this error is not material.

That is the problem (and benefit) with golden tests - you need judgement to decide what an acceptable result is. Unit tests, however, should end with a pass/fail result. Why run the golden tests for 5 minutes if the outcome is just logs accumulating for later investigation?

Unit tests provide fewer guarantees of correctness - they are responsible for interfaces and simple behaviours - but they must capture something like the error below (raised when building the documentation):

WARNING: autodoc: failed to import module 'FinBondPortfolio' from module 'financepy.products.bonds'; the following exception was raised:
Traceback (most recent call last):
  File "D:\Anaconda3\lib\site-packages\sphinx\ext\autodoc\importer.py", line 66, in import_module
    return importlib.import_module(modname)
  File "D:\Anaconda3\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "d:\github\financepy\financepy\products\bonds\FinBondPortfolio.py", line 26, in <module>
    class FinBondPortfolio(object):
  File "d:\github\financepy\financepy\products\bonds\FinBondPortfolio.py", line 54, in FinBondPortfolio
    convention: FinYTMCalcType = FinYTMCalcType.UK_DMO):
NameError: name 'FinYTMCalcType' is not defined
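
A unit test of exactly that kind - one that imports every submodule so a NameError like the one above fails the suite - takes only a few lines (the test name is made up):

import importlib
import pkgutil

import financepy

def test_all_modules_import():
    failures = []
    for modinfo in pkgutil.walk_packages(financepy.__path__, prefix="financepy."):
        try:
            importlib.import_module(modinfo.name)
        except Exception as exc:      # e.g. the NameError from FinBondPortfolio
            failures.append(modinfo.name + ": " + repr(exc))
    assert not failures, "modules failed to import:\n" + "\n".join(failures)
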
epogrebnyak commented 3 years ago

Usually a test raises an exception on error; if the run simply continues past it, the error is really just a warning. Tests that go on after an error are a bit different from unittest/pytest - they are validation, profiling, etc. Maybe it is a good idea to separate that golden/validation cycle from the unit tests.
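
One lightweight way to keep that separation inside pytest is a marker; "golden" below is an assumed marker name, registered so pytest does not warn about it:

# conftest.py -- register the marker so pytest does not warn about it
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "golden: slow golden-file/validation tests, excluded from the quick unit run")

# in a test file:
import pytest

@pytest.mark.golden
def test_full_golden_cycle():
    ...   # the long generate-and-compare run

# quick unit run:    pytest -m "not golden"
# full validation:   pytest -m golden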


domokane commented 3 years ago

Sure. I am not disagreeing. I am happy if we can add unit tests now.

Just realise that when there are many dimensions to an option price, and I am doing certain model tests (not unit tests) while the model code is evolving quickly, it is nice to autogenerate test output that I can later use to see what changes or breaks when a change is made. The warnings also highlight big speed differences.

PS The class FinBondPortfolio is under construction. I just fixed the bug.

epogrebnyak commented 3 years ago

Related issues:

epogrebnyak commented 3 years ago

Implemented via #86