azmyrajab / polars_ols

Polars least squares extension - enables fast linear model Polars expressions
MIT License
MIT License · 102 stars · 9 forks

Assertion failed: qr_factors.nrows() >= qr_factors.ncols() #3

Closed · wukan1986 closed this 5 months ago

wukan1986 commented 5 months ago
import polars as pl
import polars_ols as pls  # noqa

df = pl.DataFrame({
    "A": [9, 10, 11, 12],
    "B": [1, 2, 3, 4],
}).with_row_index()

df = df.with_columns(df.to_dummies('B'))
df = df.with_columns(pl.col('A').rolling_mean(3).alias('C'))
df = df.with_columns(pls.compute_least_squares(pl.col('A'),
                                               pl.col('B_1'), pl.col('B_2'), pl.col('B_3'), pl.col('C'),
                                               mode='residuals', null_policy='drop').alias('resid'))
print(df)

the output is:

panicked at src\least_squares.rs:36:36:
Assertion failed at C:\Users\runneradmin\.cargo\registry\src\index.crates.io-6f17d22bba15001f\faer-0.18.2\src\linalg\qr\no_pivoting\solve.rs:94:5
Assertion failed: qr_factors.nrows() >= qr_factors.ncols()
- qr_factors.nrows() = 2
- qr_factors.ncols() = 4
Traceback (most recent call last):

I hope it behaves like np.linalg.lstsq:

Else, x minimizes the Euclidean 2-norm ||b - ax||. If there are multiple minimizing solutions, the one with the smallest 2-norm ||x|| is returned.

See also: https://numpy.org/doc/stable/reference/generated/numpy.linalg.lstsq.html https://github.com/abstractqqq/polars_ds_extension/issues/94
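For reference, the equivalent computation in numpy does not error on this data: after the two rows with a null C are dropped, the system is under-determined (2 rows, 4 features) and np.linalg.lstsq returns the minimum-norm solution. A small sketch (the exact rows are my reading of the example above):

import numpy as np

# the two rows that survive null_policy='drop' (C is null in the first two rows);
# features are B_1, B_2, B_3, C and the target is A
x = np.array([[0., 0., 1., 10.],
              [0., 0., 0., 11.]])
y = np.array([11., 12.])
coef, *_ = np.linalg.lstsq(x, y, rcond=None)
print(coef)  # minimum-norm solution, roughly [0.0, 0.0, 0.0909, 1.0909]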

azmyrajab commented 5 months ago

Hi! Thanks for raising this; it should be easy for me to handle under-determined systems with the Rust lstsq solver. I'll put up something later today and message you once the new release is ready!

azmyrajab commented 5 months ago

Hi @wukan1986, it took a while but I made the required refactors to:

  1. Expose solve_method, a literal which gives the user control over the choice of numerical algorithm for each supported model (see the short snippet after this list). For example, for plain OLS you can choose between "svd" (which calls the same LAPACK SVD-based GELSD algorithm as np.linalg.lstsq and is a robust choice for under-determined or rank-deficient systems) and "qr", which is normally more performant while still being generally numerically stable; these tend to be the two main approaches. Something like "chol" (Cholesky) will likely be the fastest, as it solves the normal equations directly, at the expense of some numerical stability, but it is something a user can toggle as well.

    For this open issue specifically, it should solve your problem: when n_cols > n_rows it now defaults to SVD (akin to lstsq), so your error should go away once the upgrade is released.

    For examples and further detail around corner cases pertaining to solve_method, see test_fit_wide, test_fit_multi_collinear, and test_ols under tests.

  2. Refactor / update how null_policy works (linked to the second link you shared, in polars-ds). What I had before worked well for computing "coefficients" in a simple select context: e.g. it just dropped rows if the user specified null_policy="drop" and returned the single estimated coefficient struct. But if you set mode="predictions" or mode="residuals" your output would be resized as well, which is not ideal and prevents proper broadcasting in a .with_columns context.

    It now operates in the Rust layer and differentiates between null handling / filtering of data for 1. fitting vs 2. prediction. This was needed so that, e.g. under null_policy="drop", x and y can be filtered prior to computing coefficients, and those coefficients can then be used on the original (zero-filled) data to predict or compute residuals in a manner that maintains the shape of the original data.
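To make point 1 concrete, here is a minimal sketch of toggling the solver on your example (df as constructed in your snippet above; treat the exact call as illustrative):

import polars as pl
import polars_ols as pls  # noqa
from polars_ols.least_squares import OLSKwargs

# choose the numerical algorithm explicitly: "svd" mirrors np.linalg.lstsq,
# "qr" is usually faster, "chol" solves the normal equations directly
df = df.with_columns(
    pls.compute_least_squares(
        pl.col('A'), pl.col('B_1'), pl.col('B_2'), pl.col('B_3'), pl.col('C'),
        mode='coefficients',
        ols_kwargs=OLSKwargs(solve_method='svd'),  # or 'qr' / 'chol'
    ).alias('coefficients')
)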

Point 2 might be easier to understand with an example. This is the equivalent logic in numpy which null_policy="drop" is tested against:

import numpy as np
# x: 2-d feature array, y: 1-d target array (both may contain NaNs)

# 1) compute mask, drop invalid rows, compute coefficients
is_valid = ~np.isnan(x).any(axis=1) & ~np.isnan(y)
x_fit, y_fit = x[is_valid, :], y[is_valid]
coef = np.linalg.lstsq(x_fit, y_fit, rcond=None)[0]

# in order to broadcast (valid) predictions to the dimensions of original data; we must
# use the original (un-dropped) x. most reasonable behaviour for linear models is:
# a) always produce predictions, even for cases where target was null
# (allows extrapolation) &
# b) for residuals, by default, one wants to retain nulls of targets
# Thus the logic below:
is_nan_x = np.isnan(x)
x_predict = x.copy()
x_predict[is_nan_x] = 0.0  # fill x nans with zero
predictions_expected = x_predict @ coef  # allows extrapolation of predictions on missing data 
residuals = y - predictions_expected  # keep the nulls of y in residuals by default

Equivalent polars-ols code:

predictions = df.select(
    predictions=pl.col("y").least_squares.ols(
        pl.col("x1"), pl.col("x2"), null_policy="drop", mode="predictions"  # or "residuals" 
    )
)

Apart from "drop", you can choose "drop_y_zero_x", which computes the validity mask on targets only, applies it to both targets and features, and zero-fills any remaining null features. There is also "zero", which simply zero-fills everything prior to fitting.
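In rough numpy terms (continuing the x and y arrays from the snippet above), those two policies behave like:

# "drop_y_zero_x": validity mask computed on targets only, applied to both targets and
# features, then any remaining null features are zero-filled
is_valid_y = ~np.isnan(y)
x_fit = np.nan_to_num(x[is_valid_y, :], nan=0.0)
y_fit = y[is_valid_y]

# "zero": simply zero-fill everything prior to fitting
x_fit_zero = np.nan_to_num(x, nan=0.0)
y_fit_zero = np.nan_to_num(y, nan=0.0)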

Now, if you have some specific null-processing filter that is not supported, you can always fall back to the general fit-then-predict workflow:

coefficients_train = df.pipe(fancy_filter).select(index_col, pl.col("y").least_squares.ols(..., mode="coefficients"))

# replace df with df_test if predicting on out-of-sample data instead of e.g. cross-sectional (xs) regressions
coefficients_train.join(df, on=index_col).select(
  (pl.col("y") - pl.col("coefficients").least_squares.predict(...)).alias("residuals_test")
)

Replace (...) with your feature column expressions. This gives you full control; the predict step is also in Rust, so it should be decently fast, and all of this is lazy-friendly.
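For example, a hypothetical filled-in version using the columns from this thread (with fancy_filter stood in by a simple is_not_null filter; a sketch rather than tested code):

features = [pl.col("B_1"), pl.col("B_2"), pl.col("B_3"), pl.col("C")]

coefficients_train = df.filter(pl.col("C").is_not_null()).select(
    "index",
    pl.col("A").least_squares.ols(*features, mode="coefficients").alias("coefficients"),
)

residuals_test = coefficients_train.join(df, on="index").select(
    (pl.col("A") - pl.col("coefficients").least_squares.predict(*features)).alias("residuals_test")
)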

Please have a look at these changes, and if they look good to you I'll work towards a release so you can upgrade.

wukan1986 commented 5 months ago

Hi @azmyrajab, I cloned and ran it in WSL; it throws undefined symbol: cblas_sgemv

(py311) kan@DESKTOP-7DFCADR:~/test1$ python c.py
Traceback (most recent call last):
  File "/home/kan/test1/c.py", line 12, in <module>
    df = df.with_columns(pls.compute_least_squares(pl.col('A'),
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/kan/miniconda3/envs/py311/lib/python3.11/site-packages/polars/dataframe/frame.py", line 8297, in with_columns
    return self.lazy().with_columns(*exprs, **named_exprs).collect(_eager=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/kan/miniconda3/envs/py311/lib/python3.11/site-packages/polars/lazyframe/frame.py", line 1943, in collect
    return wrap_df(ldf.collect())
                   ^^^^^^^^^^^^^
polars.exceptions.ComputeError: error loading dynamic library: /home/kan/test1/polars_ols/polars_ols/_polars_ols.abi3.so: undefined symbol: cblas_sgemv

Error originated just after this operation:
DF ["index", "A", "B", "B_1"]; PROJECT */8 COLUMNS; SELECTION: "None"
import polars as pl
import polars_ols as pls  # noqa
from polars_ols.least_squares import OLSKwargs

df = pl.DataFrame({
    "A": [9, 10, 11, 12],
    "B": [1, 2, 3, 4],
}).with_row_index()

df = df.with_columns(df.to_dummies('B'))
df = df.with_columns(pl.col('A').rolling_mean(3).alias('C'))
df = df.with_columns(pls.compute_least_squares(pl.col('A'),
                                               pl.col('B_1'), pl.col('B_2'), pl.col('B_3'), pl.col('C'),
                                               mode='residuals', ols_kwargs=OLSKwargs(null_policy='drop', solve_method='svd')).alias('resid'))
print(df)
azmyrajab commented 5 months ago

Hi @wukan1986, sorry, this was because I tried implementing the direct equivalent of np.linalg.lstsq, which is a wrapper around LAPACK GELSD and solves possibly over- or under-determined least squares problems with an efficiently implemented divide-and-conquer SVD.

Unfortunately, the Rust libraries which provide a (nice) interface to it, like ndarray-linalg, have been very complex to get building correctly, in a clean manner, across all operating systems because of the BLAS or Intel MKL dependencies they ship with. It's non-trivial to install something like OpenBLAS for the user, in GitHub Actions, universally across all OSes.

So what I did for now is implement the following logic: for macOS (which always comes with the required BLAS dependencies), use the faster direct LAPACK lstsq port, so it is slightly faster on that OS. For Linux and Windows I fall back to the SVD implementation I wrote for Ridge. It is a little slower than the direct LAPACK call but isn't too bad; in my benchmarks it is still faster than a Python call to numpy lstsq by a factor of about 2x.

I tested this (my) SVD implementation in some degenerate / multicollinear cases as well as wide matrices (p >> n) and it matches numpy lstsq calls there.

I'll run a few additional tests and release a new version soon - but feel free to check it out and try again if you have the bandwidth.

Thanks,


This is the implementation for reference (it is close to sklearn's Ridge solver="svd" implementation, but runs meaningfully faster being in Rust). It handles RCOND the same way np.linalg.lstsq does, to guarantee similar results:

// (imports assumed from the surrounding crate: ndarray's Array1 / Array2 / Axis and the s! macro,
//  faer <-> ndarray interop traits such as IntoFaer / IntoNdarray, and std::cmp::max)
fn solve_ridge_svd(
    y: &Array1<f32>,
    x: &Array2<f32>,
    alpha: f32,
    rcond: Option<f32>,
) -> Array1<f32> {
    let x_faer = x.view().into_faer();
    let y_faer = y.view().insert_axis(Axis(1)).into_faer();

    // compute SVD and extract u, s, vt
    let svd = x_faer.thin_svd();
    let u = svd.u();
    let v = svd.v().into_ndarray();
    let s = svd.s_diagonal();

    // convert s into ndarray
    let s: Array1<f32> = s.as_2d().into_ndarray().slice(s![.., 0]).into_owned();
    let max_value = s.iter().skip(1).copied().fold(s[0], f32::max);

    // set singular values less than or equal to ``rcond * largest_singular_value`` to zero.
    let cutoff =
        rcond.unwrap_or(f32::EPSILON * max(x_faer.ncols(), x_faer.nrows()) as f32) * max_value;
    let s = s.map(|v| if v < &cutoff { 0. } else { *v });

    // project the targets onto the left singular vectors: u^T y
    let binding = u.transpose() * y_faer;
    let u_t_y: Array1<f32> = binding
        .as_ref()
        .into_ndarray()
        .slice(s![.., 0])
        .into_owned();
    // ridge-regularised inverse of the singular values: d_i = s_i / (s_i^2 + alpha)
    let d = &s / (&s * &s + alpha);
    let d_ut_y = &d * &u_t_y;
    // map back to feature space: coefficients = v · (d * u^T y)
    v.dot(&d_ut_y)
}
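For anyone who prefers to cross-check the logic in Python, the same ridge-SVD solve reads roughly as follows (a sketch in f64 rather than f32, with the division guarded so that zeroed singular values contribute nothing):

import numpy as np

def solve_ridge_svd_np(y: np.ndarray, x: np.ndarray, alpha: float, rcond: float | None = None) -> np.ndarray:
    # thin SVD: x = u @ diag(s) @ vt
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    # zero out singular values below rcond * largest_singular_value (np.linalg.lstsq-style cutoff)
    cutoff = (rcond if rcond is not None else np.finfo(x.dtype).eps * max(x.shape)) * s.max()
    s = np.where(s < cutoff, 0.0, s)
    # ridge-regularised inverse of the singular values: d_i = s_i / (s_i^2 + alpha)
    d = np.divide(s, s * s + alpha, out=np.zeros_like(s), where=s > 0.0)
    # coefficients = v @ diag(d) @ u.T @ y
    return vt.T @ (d * (u.T @ y))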
azmyrajab commented 5 months ago

I'll also try to get Linux/Windows to run on the LAPACK wrapper if I get some more bandwidth later this week; I have some ideas on how to sidestep all the additional BLAS dependencies.

wukan1986 commented 5 months ago

Great job!!! It runs in WSL, but null_policy='drop' does not work as I expected.

import polars as pl
import polars_ols as pls  # noqa
from polars_ols.least_squares import OLSKwargs

df = pl.DataFrame({
    "A": [9, 10, 11, 12],
    "B": [1, 2, 3, 4],
}).with_row_index()

df = df.with_columns(df.to_dummies('B'))
df = df.with_columns(pl.col('A').rolling_mean(3).alias('C'))
df = df.with_columns(pls.compute_least_squares(pl.col('A'),
                                               pl.col('B_1'), pl.col('B_2'), pl.col('B_3'), pl.col('C'),
                                               mode='residuals', ols_kwargs=OLSKwargs(null_policy='drop', solve_method='svd')).alias('resid'))
print(df)
"""
shape: (4, 9)
┌───────┬─────┬─────┬─────┬───┬─────┬─────┬──────┬────────────┐
│ index ┆ A   ┆ B   ┆ B_1 ┆ … ┆ B_3 ┆ B_4 ┆ C    ┆ resid      │
│ ---   ┆ --- ┆ --- ┆ --- ┆   ┆ --- ┆ --- ┆ ---  ┆ ---        │
│ u32   ┆ i64 ┆ i64 ┆ u8  ┆   ┆ u8  ┆ u8  ┆ f64  ┆ f32        │
╞═══════╪═════╪═════╪═════╪═══╪═════╪═════╪══════╪════════════╡
│ 0     ┆ 9   ┆ 1   ┆ 1   ┆ … ┆ 0   ┆ 0   ┆ null ┆ 9.0        │
│ 1     ┆ 10  ┆ 2   ┆ 0   ┆ … ┆ 0   ┆ 0   ┆ null ┆ 10.0       │
│ 2     ┆ 11  ┆ 3   ┆ 0   ┆ … ┆ 1   ┆ 0   ┆ 10.0 ┆ -9.5367e-7 │
│ 3     ┆ 12  ┆ 4   ┆ 0   ┆ … ┆ 0   ┆ 1   ┆ 11.0 ┆ 9.5367e-7  │
└───────┴─────┴─────┴─────┴───┴─────┴─────┴──────┴────────────┘
"""

I hope the resid is:

null
null
-9.5367e-7
9.5367e-7
azmyrajab commented 5 months ago

Thanks @wukan1986!

So, the current behaviour you see in your example was sort of intended, at least the way I thought it should work; but perhaps I can expose another option which drops and propagates any nulls (on both features and targets).

The current behaviour drops nulls in fitting, obtains the coefficients, dots them with zero'd features to always produce valid predictions, then computes residuals as y - y_hat (similar to that numpy equivalent snippet I shared with you):

# in order to broadcast (valid) predictions to the dimensions of original data; we must
# use the original (un-dropped) x. most reasonable behaviour for linear models is:
# a) always produce predictions, even for cases where target was null
# (allows extrapolation) &
# b) for residuals, by default, one wants to retain nulls of targets
# Thus the logic below:
is_nan_x = np.isnan(x)
x_predict = x.copy()
x_predict[is_nan_x] = 0.0  # fill x nans with zero 
predictions_expected = x_predict @ coef  # allows extrapolation of predictions on missing data 
residuals = y - predictions_expected  # keep the nulls of y in residuals by default

  1. So if C, which has some nulls, is the target regressed onto A and the Bs, we'd see its nulls propagate into the residuals:

    import polars as pl
    import polars_ols as pls  # noqa
    from polars_ols.least_squares import OLSKwargs
    df = pl.DataFrame({
    "A": [9, 10, 11, 12],
    "B": [1, 2, 3, 4],
    }).with_row_index()
    df = df.with_columns(df.to_dummies('B'))
    df = df.with_columns(pl.col('A').rolling_mean(3).alias('C'))
    df = df.with_columns(pls.compute_least_squares(pl.col('C'),
                                               pl.col('B_1'), pl.col('B_2'), pl.col('B_3'), pl.col('A'),
                                               mode='residuals', ols_kwargs=OLSKwargs(null_policy='drop', solve_method='svd')).alias('resid'))
    print(df)
    shape: (4, 9)
    ┌───────┬─────┬─────┬─────┬───┬─────┬─────┬──────┬───────────┐
    │ index ┆ A   ┆ B   ┆ B_1 ┆ … ┆ B_3 ┆ B_4 ┆ C    ┆ resid     │
    │ ---   ┆ --- ┆ --- ┆ --- ┆   ┆ --- ┆ --- ┆ ---  ┆ ---       │
    │ u32   ┆ i64 ┆ i64 ┆ u8  ┆   ┆ u8  ┆ u8  ┆ f64  ┆ f32       │
    ╞═══════╪═════╪═════╪═════╪═══╪═════╪═════╪══════╪═══════════╡
    │ 0     ┆ 9   ┆ 1   ┆ 1   ┆ … ┆ 0   ┆ 0   ┆ null ┆ null      │
    │ 1     ┆ 10  ┆ 2   ┆ 0   ┆ … ┆ 0   ┆ 0   ┆ null ┆ null      │
    │ 2     ┆ 11  ┆ 3   ┆ 0   ┆ … ┆ 1   ┆ 0   ┆ 10.0 ┆ -0.000003 │
    │ 3     ┆ 12  ┆ 4   ┆ 0   ┆ … ┆ 0   ┆ 1   ┆ 11.0 ┆ 0.0       │
    └───────┴─────┴─────┴─────┴───┴─────┴─────┴──────┴───────────┘
  2. But if A, which does not have nulls, is regressed onto some features with nulls: the coefficients will be based on the dropped (valid) rows, but the predictions will always try to extrapolate, and so valid residuals will result. See the example below.

ols_kwargs = OLSKwargs(null_policy='drop', solve_method='svd')

df = df.with_columns(
    pls.compute_least_squares(pl.col('A'), pl.col('B_1'), pl.col('B_2'), pl.col('B_3'), pl.col('C'),
                             mode='coefficients', ols_kwargs=ols_kwargs
                              ).alias('coefficients'),
    pls.compute_least_squares(pl.col('A'), pl.col('B_1'), pl.col('B_2'), pl.col('B_3'), pl.col('C'),
                              mode='predictions', ols_kwargs=ols_kwargs
                              ).alias('predictions'),
    pls.compute_least_squares(pl.col('A'), pl.col('B_1'), pl.col('B_2'), pl.col('B_3'), pl.col('C'),
                              mode='residuals', ols_kwargs=ols_kwargs
                              ).alias('residuals'),
)
print(df)
shape: (4, 12)
┌───────┬─────┬─────┬─────┬───┬─────────────────────┬────────────────────┬─────────────┬───────────┐
│ index ┆ A   ┆ B   ┆ B_1 ┆ … ┆ resid               ┆ coefficients       ┆ predictions ┆ residuals │
│ ---   ┆ --- ┆ --- ┆ --- ┆   ┆ ---                 ┆ ---                ┆ ---         ┆ ---       │
│ u32   ┆ i64 ┆ i64 ┆ u8  ┆   ┆ struct[4]           ┆ struct[4]          ┆ f32         ┆ f32       │
╞═══════╪═════╪═════╪═════╪═══╪═════════════════════╪════════════════════╪═════════════╪═══════════╡
│ 0     ┆ 9   ┆ 1   ┆ 1   ┆ … ┆ {0.0,0.0,0.09091,1. ┆ {0.0,0.0,0.09091,1 ┆ 0.0         ┆ 9.0       │
│       ┆     ┆     ┆     ┆   ┆ 090909}             ┆ .090909}           ┆             ┆           │
│ 1     ┆ 10  ┆ 2   ┆ 0   ┆ … ┆ {0.0,0.0,0.09091,1. ┆ {0.0,0.0,0.09091,1 ┆ 0.0         ┆ 10.0      │
│       ┆     ┆     ┆     ┆   ┆ 090909}             ┆ .090909}           ┆             ┆           │
│ 2     ┆ 11  ┆ 3   ┆ 0   ┆ … ┆ {0.0,0.0,0.09091,1. ┆ {0.0,0.0,0.09091,1 ┆ 10.999999   ┆ 9.5367e-7 │
│       ┆     ┆     ┆     ┆   ┆ 090909}             ┆ .090909}           ┆             ┆           │
│ 3     ┆ 12  ┆ 4   ┆ 0   ┆ … ┆ {0.0,0.0,0.09091,1. ┆ {0.0,0.0,0.09091,1 ┆ 11.999998   ┆ 0.000002  │
│       ┆     ┆     ┆     ┆   ┆ 090909}             ┆ .090909}           ┆             ┆           │
└───────┴─────┴─────┴─────┴───┴─────────────────────┴────────────────────┴─────────────┴───────────┘

The coefficients match those estimated after dropping any invalid rows (i.e. fitting on only the last two rows in this example), while the predictions match zeroing out the null features and then dotting with those coefficients. The reason for this behaviour was to allow predicting on data with a few null features but otherwise valid data.
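As a quick numpy sanity check of that statement, dotting the zero-filled features with the coefficients from the struct above reproduces the predictions and residuals in the table:

import numpy as np

# original features (B_1, B_2, B_3, C) with the null C values zero-filled
x_predict = np.array([[1., 0., 0., 0.],
                      [0., 1., 0., 0.],
                      [0., 0., 1., 10.],
                      [0., 0., 0., 11.]])
coef = np.array([0.0, 0.0, 0.09091, 1.090909])  # the "coefficients" struct above
print(x_predict @ coef)                           # ~[0.0, 0.0, 11.0, 12.0]
print(np.array([9., 10., 11., 12.]) - x_predict @ coef)  # residuals ~[9.0, 10.0, 0.0, 0.0]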


Shall we expose a "drop_propagate" option which drops nulls in fitting and applies the same validity mask to predictions, so that any row with any null (feature or target) always results in null predictions and null residuals? Or do you believe it's better to reconsider the current default behaviour so that "drop" always propagates any nulls (and perhaps "drop_zero" does what I wanted for my use case)?

I'm happy either way as the actual change is simple to do quickly, thanks!

wukan1986 commented 5 months ago

Hi @azmyrajab, the residuals are usually normally distributed, and 9 and 10 are very obvious outliers, so I think returning null is better than 9 and 10.

Adding a new option may be better

azmyrajab commented 5 months ago

Okay - done, once 0.2.8 clears we'll have:

NullPolicy = Literal[
    "zero",  # simply zero fills nulls in both targets & features
    "drop",  # drops any rows with nulls in fitting and masks associated predictions with nulls
    "ignore",  # use this option if nulls are already handled upstream
    "drop_zero",  # drops any rows with nulls in fitting, but then computes predictions
    # with zero filled features. Use this to allow for extrapolation.
    "drop_y_zero_x",  # only drops rows with null targets and fill any null features with zero prior to prediction
]

And so now your example with "drop" will do what you expected it to:

import polars as pl
import polars_ols as pls  # noqa
from polars_ols.least_squares import OLSKwargs
df = pl.DataFrame({
    "A": [9, 10, 11, 12],
    "B": [1, 2, 3, 4],
}).with_row_index()
df = df.with_columns(df.to_dummies('B'))
df = df.with_columns(pl.col('A').rolling_mean(3).alias('C'))
ols_kwargs = OLSKwargs(null_policy='drop', solve_method='svd')
df = df.with_columns(
    pls.compute_least_squares(pl.col('A'), pl.col('B_1'), pl.col('B_2'), pl.col('B_3'), pl.col('C'),
                              mode='coefficients', ols_kwargs=ols_kwargs
                              ).alias('coefficients'),
    pls.compute_least_squares(pl.col('A'), pl.col('B_1'), pl.col('B_2'), pl.col('B_3'), pl.col('C'),
                              mode='predictions', ols_kwargs=ols_kwargs
                              ).alias('predictions'),
    pls.compute_least_squares(pl.col('A'), pl.col('B_1'), pl.col('B_2'), pl.col('B_3'), pl.col('C'),
                              mode='residuals', ols_kwargs=ols_kwargs
                              ).alias('residuals'),
)
print(df)
shape: (4, 11)
┌───────┬─────┬─────┬─────┬───┬──────┬────────────────────────────┬─────────────┬───────────┐
│ index ┆ A   ┆ B   ┆ B_1 ┆ … ┆ C    ┆ coefficients               ┆ predictions ┆ residuals │
│ ---   ┆ --- ┆ --- ┆ --- ┆   ┆ ---  ┆ ---                        ┆ ---         ┆ ---       │
│ u32   ┆ i64 ┆ i64 ┆ u8  ┆   ┆ f64  ┆ struct[4]                  ┆ f32         ┆ f32       │
╞═══════╪═════╪═════╪═════╪═══╪══════╪════════════════════════════╪═════════════╪═══════════╡
│ 0     ┆ 9   ┆ 1   ┆ 1   ┆ … ┆ null ┆ {0.0,0.0,0.09091,1.090909} ┆ null        ┆ null      │
│ 1     ┆ 10  ┆ 2   ┆ 0   ┆ … ┆ null ┆ {0.0,0.0,0.09091,1.090909} ┆ null        ┆ null      │
│ 2     ┆ 11  ┆ 3   ┆ 0   ┆ … ┆ 10.0 ┆ {0.0,0.0,0.09091,1.090909} ┆ 10.999999   ┆ 9.5367e-7 │
│ 3     ┆ 12  ┆ 4   ┆ 0   ┆ … ┆ 11.0 ┆ {0.0,0.0,0.09091,1.090909} ┆ 11.999998   ┆ 0.000002  │
└───────┴─────┴─────┴─────┴───┴──────┴────────────────────────────┴─────────────┴───────────┘

And the new option that fits on the dropped data and then predicts, using those estimated coefficients, on zero-filled features (to allow extrapolation behaviour) is null_policy="drop_zero".
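In the snippet above that just means swapping the kwargs, e.g.:

ols_kwargs = OLSKwargs(null_policy='drop_zero', solve_method='svd')

which gives: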

shape: (4, 11)
┌───────┬─────┬─────┬─────┬───┬──────┬────────────────────────────┬─────────────┬───────────┐
│ index ┆ A   ┆ B   ┆ B_1 ┆ … ┆ C    ┆ coefficients               ┆ predictions ┆ residuals │
│ ---   ┆ --- ┆ --- ┆ --- ┆   ┆ ---  ┆ ---                        ┆ ---         ┆ ---       │
│ u32   ┆ i64 ┆ i64 ┆ u8  ┆   ┆ f64  ┆ struct[4]                  ┆ f32         ┆ f32       │
╞═══════╪═════╪═════╪═════╪═══╪══════╪════════════════════════════╪═════════════╪═══════════╡
│ 0     ┆ 9   ┆ 1   ┆ 1   ┆ … ┆ null ┆ {0.0,0.0,0.09091,1.090909} ┆ 0.0         ┆ 9.0       │
│ 1     ┆ 10  ┆ 2   ┆ 0   ┆ … ┆ null ┆ {0.0,0.0,0.09091,1.090909} ┆ 0.0         ┆ 10.0      │
│ 2     ┆ 11  ┆ 3   ┆ 0   ┆ … ┆ 10.0 ┆ {0.0,0.0,0.09091,1.090909} ┆ 10.999999   ┆ 9.5367e-7 │
│ 3     ┆ 12  ┆ 4   ┆ 0   ┆ … ┆ 11.0 ┆ {0.0,0.0,0.09091,1.090909} ┆ 11.999998   ┆ 0.000002  │
└───────┴─────┴─────┴─────┴───┴──────┴────────────────────────────┴─────────────┴───────────┘

The test test_fit_missing_data_predictions_and_residuals in test_ols (https://github.com/azmyrajab/polars_ols/blob/main/tests/test_ols.py) should document all the null handling behaviours.

wukan1986 commented 5 months ago

Thank you very much, I will use it in my project

azmyrajab commented 5 months ago

👍👍