@mgzde Can you please provide the following information:
I've installed git and CMake (3.16.6). I've installed Visual Studio Code (April 2020 version). (Actually, I want to use lightgbm in RStudio, so I have RStudio 3.6.3.) I've also installed Rtools35 and added C:\Program Files\Rtools\bin to PATH. Then I've run the following commands:
git clone --recursive https://github.com/microsoft/LightGBM
cd LightGBM
Rscript build_r.R
Then I've tried the following code in R on the agaricus data; at the end it says: there is no applicable method for 'predict' applied to an object of class "NULL" (the original error message was in Turkish).
After that I've tried the bank dataset, and it gives me an error: Error in lgb.dump(model, num_iteration = 1) : lgb.save: booster should be an 'lgb.Booster'. Also, its model information comes back as NULL (empty).
Thank you @mgzde , I will take a look in a few hours and see if I can replicate this.
In the future, please paste logs as text and code in code blocks instead of screenshots. That makes it easier for me to copy your code and run it (with screenshots, I will have to re-type it by hand), and makes it more likely that other people who face the same issue as you will find this issue from search engines.
If you've never used the formatting options in GitHub comments, you might explore https://help.github.com/en/github/writing-on-github/basic-writing-and-formatting-syntax
I'm so sorry for this. I'm really new to GitHub, so I don't know the formatting conventions. I'll write the code again as text instead of screenshots now.
No problem @mgzde! Thanks for copying in the code. I really do recommend you read https://help.github.com/en/github/writing-on-github/basic-writing-and-formatting-syntax when you have the time.
The `#` has a special meaning that is causing some of your code comments to look like titles. I copied your comment and reformatted it below the line here. If you click ... then Edit on my comment, you'll be able to see the raw text I typed and how the formatting works.
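For example, here is what a fenced code block looks like as raw text in a comment (three backticks on their own line before and after the code; the "r" after the opening fence turns on R syntax highlighting):

```r
# Inside a fence, this line stays a comment instead of becoming a title
library(lightgbm)
```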
#LIGHTGBM
library(lightgbm)
library(methods)
### We load in the agaricus dataset
### In this example, we are aiming to predict whether a mushroom is edible
data(agaricus.train, package = "lightgbm")
data(agaricus.test, package = "lightgbm")
train <- agaricus.train
test <- agaricus.test
### The loaded data is stored in sparseMatrix, and label is a numeric vector in {0,1}
class(train$label)
class(train$data)
#--------------------Basic Training using lightgbm----------------
### This is the basic usage of lightgbm; you can put a matrix in the data field
### Note: we are putting in a sparse matrix here; lightgbm naturally handles sparse input
### Use a sparse matrix when your features are sparse (e.g. when you are using one-hot encoded vectors)
print("Training lightgbm with sparseMatrix")
bst <- lightgbm(
data = train$data
, label = train$label
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, objective = "binary"
)
### Alternatively, you can put in dense matrix, i.e. basic R-matrix
print("Training lightgbm with Matrix")
bst <- lightgbm(
data = as.matrix(train$data)
, label = train$label
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, objective = "binary"
)
### You can also put in an lgb.Dataset object, which stores label, data, and other metadata needed for advanced features
print("Training lightgbm with lgb.Dataset")
dtrain <- lgb.Dataset(
data = train$data
, label = train$label
)
bst <- lightgbm(
data = dtrain
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, objective = "binary"
)
### Verbose = 0,1,2
print("Train lightgbm with verbose 0, no message")
bst <- lightgbm(
data = dtrain
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, objective = "binary"
, verbose = 0L
)
print("Train lightgbm with verbose 1, print evaluation metric")
bst <- lightgbm(
data = dtrain
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, nthread = 2L
, objective = "binary"
, verbose = 1L
)
print("Train lightgbm with verbose 2, also print information about tree")
bst <- lightgbm(
data = dtrain
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, nthread = 2L
, objective = "binary"
, verbose = 2L
)
### You can also specify data as a file path to a LibSVM/TSV/CSV format input
### Since we do not have this file with us, the following lines are just for illustration
### bst <- lightgbm(
### data = "agaricus.train.svm"
### , num_leaves = 4L
### , learning_rate = 1.0
### , nrounds = 2L
### , objective = "binary"
### )
#--------------------Basic prediction using lightgbm--------------
### You can do prediction using the following line
### You can put in Matrix, sparseMatrix, or lgb.Dataset
pred <- predict(bst, test$data)
err <- mean(as.numeric(pred > 0.5) != test$label)
print(paste("test-error=", err))
###--------------------Save and load models-------------------------
# Save model to binary local file
lgb.save(bst, "lightgbm.model")
### Load binary model to R
bst2 <- lgb.load("lightgbm.model")
pred2 <- predict(bst2, test$data)
### pred2 should be identical to pred
print(paste("sum(abs(pred2-pred))=", sum(abs(pred2 - pred))))
#--------------------Advanced features ---------------------------
### To use advanced features, we need to put data in lgb.Dataset
dtrain <- lgb.Dataset(data = train$data, label = train$label, free_raw_data = FALSE)
dtest <- lgb.Dataset.create.valid(dtrain, data = test$data, label = test$label)
#--------------------Using validation set-------------------------
### valids is a list of lgb.Dataset, each of them is tagged with name
valids <- list(train = dtrain, test = dtest)
### To train with valids, use lgb.train, which contains more advanced features
### valids allows us to monitor the evaluation result on all data in the list
print("Train lightgbm using lgb.train with valids")
bst <- lgb.train(
data = dtrain
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, valids = valids
, nthread = 2L
, objective = "binary"
)
### We can change evaluation metrics, or use multiple evaluation metrics
print("Train lightgbm using lgb.train with valids, watch logloss and error")
bst <- lgb.train(
data = dtrain
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, valids = valids
, eval = c("binary_error", "binary_logloss")
, nthread = 2L
, objective = "binary"
)
### lgb.Dataset can also be saved using lgb.Dataset.save
lgb.Dataset.save(dtrain, "dtrain.buffer")
### To load it in, simply call lgb.Dataset
dtrain2 <- lgb.Dataset("dtrain.buffer")
bst <- lgb.train(
data = dtrain2
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, valids = valids
, nthread = 2L
, objective = "binary"
)
### information can be extracted from lgb.Dataset using getinfo
label <- getinfo(dtest, "label")
pred <- predict(bst, test$data)
err <- as.numeric(sum(as.integer(pred > 0.5) != label)) / length(label)
print(paste("test-error=", err))`
at the end, it says:
there is no applicable method for 'predict' applied to an object of class "NULL" (the original error message was in Turkish)
After that I've tried the bank dataset;
library(lightgbm)
library(data.table)
data(bank, package = "lightgbm")
str(bank)
### We are dividing the dataset into two: one train, one validation
bank_train <- bank[1:4000, ]
bank_test <- bank[4001:4521, ]
head(bank_train)
head(bank_test)
bank_rules <- lgb.prepare_rules(data = bank_train)
bank_train <- bank_rules$data
bank_test <- lgb.prepare_rules(data = bank_test, rules = bank_rules$rules)$data
str(bank_test)
### Subtract 1 from the label because it must be in {0, 1}
bank_train$y <- bank_train$y - 1
bank_test$y <- bank_test$y - 1
### Data input to LightGBM must be a matrix, without the label
my_data_train <- as.matrix(bank_train[, 1:16, with = FALSE])
my_data_test <- as.matrix(bank_test[, 1:16, with = FALSE])
### Creating the LightGBM dataset with categorical features
### The categorical features can instead be passed to lgb.train, to avoid copying and pasting them repeatedly
dtrain <- lgb.Dataset(data = my_data_train,
label = bank_train$y)
dtest <- lgb.Dataset(data = my_data_test,
label = bank_test$y)
# We can now train a model
model <- lgb.train(list(objective = "binary",
metric = "l2",
min_data = 1,
learning_rate = 0.1,
min_data = 0,
min_hessian = 1,
max_depth = 2,
categorical_feature = c(2, 3, 4, 5, 7, 8, 9, 11, 16)),
dtrain,
100,
valids = list(train = dtrain, valid = dtest))
### Try to find split_feature: 11
### If you find it, it means it used a categorical feature in the first tree
lgb.dump(model, num_iteration = 1)
### Subtract 1 from the label because it must be in {0, 1}
bank$y <- bank$y - 1
### Data input to LightGBM must be a matrix, without the label
my_data <- as.matrix(bank[, 1:16, with = FALSE])
### Creating the LightGBM dataset with categorical features
### The categorical features must be indexed like in R (1-indexed, not 0-indexed)
lgb_data <- lgb.Dataset(data = my_data,
#label = bank$y,
categorical_feature = c(2, 3, 4, 5, 7, 8, 9, 11, 16))
### We can now train a model
model <- lgb.train(list(objective = "binary",
metric = "l2",
min_data = 1,
learning_rate = 0.1,
min_data = 0,
min_hessian = 1,
max_depth = 2),
lgb_data,
100,
valids = list(train = lgb_data))
### Try to find split_feature: 2
### If you find it, it means it used a categorical feature in the first tree
lgb.dump(model, num_iteration = 1)
And it gives me an error:
Error in lgb.dump(model, num_iteration = 1) : lgb.save: booster should be an ‘lgb.Booster’
Also, its model information is:
Type: NULL
Length: 0
Size: 0 B
Value: NULL (empty)
I'll try to reproduce this tonight and let you know what I find! Thanks again for using LightGBM and reporting the issue.
Sorry for the delay! I was able to test out this code today. All details of my investigation are given below.
For the first block of code, using agaricus: I was not able to reproduce the issue you are seeing. Please try re-running this code from a clean R session.
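One quick diagnostic I can suggest (my own suggestion, not part of the demo code): confirm that the trained object really is a booster before calling predict(). If an earlier training step failed, bst will not exist or will be NULL, which produces exactly the "no applicable method for 'predict' applied to an object of class "NULL"" error.
class(bst)                    # a trained model prints "lgb.Booster" "R6"
inherits(bst, "lgb.Booster")  # TRUE for a real booster; FALSE means training never produced one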
For the second block of code, using bank: the code beginning with # Try to find split_feature: 11 is incorrect. It is passing a data.table that still has strings into lgb.Dataset(), which expects only numeric input. That is why you see warnings like "NAs introduced by coercion".
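Here is a minimal sketch of that coercion problem, using a toy data.frame rather than the real bank data:
# Mixing numeric and character columns makes as.matrix() return a
# character matrix, and numeric coercion of strings then produces NAs
df <- data.frame(age = c(30, 41), job = c("admin.", "services"),
                 stringsAsFactors = FALSE)
m <- as.matrix(df)
storage.mode(m)         # "character"
as.numeric(m[, "job"])  # NA NA, with a "NAs introduced by coercion" warning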
It looks like you copied most of the bank code from the categorical features demo in our documentation, but missed an important line:
bank <- lgb.prepare(data = bank)
That code takes in a data.table / data.frame that could have character or factor columns, and makes sure that you get back one with only numeric columns.
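As a sketch of the effect (assuming the job column of the bundled bank data starts out as character or factor; exact column types may vary by package version):
library(lightgbm)
data(bank, package = "lightgbm")
class(bank$job)                  # character (or factor) in the raw data
bank <- lgb.prepare(data = bank)
class(bank$job)                  # numeric codes after conversion
# as.matrix(bank[, 1:16, with = FALSE]) now yields a numeric matrix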
Please let me know if this solves the issue. If it does not, please try to reproduce the problems you're seeing with a smaller amount of code, in a new R session.
Actually, I want to use lightgbm in RStudio, so I have RStudio 3.6.3
I assumed that this means you're using R version 3.6.3. RStudio's most recent version is 1.3.959.
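If it helps, you can print the version of R that RStudio is running from the R console (standard base R; the commented call below only works inside RStudio):
R.version.string              # e.g. "R version 3.6.3 (2020-02-29)"
# RStudio.Version()$version   # RStudio's own version, a separate number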
My environment for testing the code you provided:
- R 3.6.3
- CMake 3.16.5
- LightGBM cloned from master as of https://github.com/microsoft/LightGBM/commit/ce95d9ca8aab8fd007a2682d4cc80157b4e1b801

Installation Results
To install, I ran
git clone --recursive https://github.com/microsoft/LightGBM
cd LightGBM
Rscript build_r.R
Installation succeeded.
agaricus
I ran all of the following code exactly in order, in a single new R session.
library(lightgbm)
library(methods)
### In this example, we are aiming to predict whether a mushroom is edible
data(agaricus.train, package = "lightgbm")
data(agaricus.test, package = "lightgbm")
train <- agaricus.train
test <- agaricus.test
### The loaded data is stored in sparseMatrix, and label is a numeric vector in {0,1}
class(train$label)
class(train$data)
#--------------------Basic Training using lightgbm----------------
### This is the basic usage of lightgbm; you can put a matrix in the data field
### Note: we are putting in a sparse matrix here; lightgbm naturally handles sparse input
### Use a sparse matrix when your features are sparse (e.g. when you are using one-hot encoded vectors)
print("Training lightgbm with sparseMatrix")
bst <- lightgbm(
data = train$data
, label = train$label
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, objective = "binary"
)
This succeeded.
### Alternatively, you can put in dense matrix, i.e. basic R-matrix
print("Training lightgbm with Matrix")
bst <- lightgbm(
data = as.matrix(train$data)
, label = train$label
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, objective = "binary"
)
This also succeeded.
### You can also put in an lgb.Dataset object, which stores label, data, and other metadata needed for advanced features
print("Training lightgbm with lgb.Dataset")
dtrain <- lgb.Dataset(
data = train$data
, label = train$label
)
bst <- lightgbm(
data = dtrain
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, objective = "binary"
)
Succeeded.
### Verbose = 0,1,2
print("Train lightgbm with verbose 0, no message")
bst <- lightgbm(
data = dtrain
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, objective = "binary"
, verbose = 0L
)
print("Train lightgbm with verbose 1, print evaluation metric")
bst <- lightgbm(
data = dtrain
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, nthread = 2L
, objective = "binary"
, verbose = 1L
)
print("Train lightgbm with verbose 2, also print information about tree")
bst <- lightgbm(
data = dtrain
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, nthread = 2L
, objective = "binary"
, verbose = 2L
)
Succeeded.
#--------------------Basic prediction using lightgbm--------------
### You can do prediction using the following line
### You can put in Matrix, sparseMatrix, or lgb.Dataset
pred <- predict(bst, test$data)
err <- mean(as.numeric(pred > 0.5) != test$label)
print(paste("test-error=", err))
Succeeded.
###--------------------Save and load models-------------------------
# Save model to binary local file
lgb.save(bst, "lightgbm.model")
### Load binary model to R
bst2 <- lgb.load("lightgbm.model")
pred2 <- predict(bst2, test$data)
### pred2 should be identical to pred
print(paste("sum(abs(pred2-pred))=", sum(abs(pred2 - pred))))
Succeeded.
#--------------------Advanced features ---------------------------
### To use advanced features, we need to put data in lgb.Dataset
dtrain <- lgb.Dataset(data = train$data, label = train$label, free_raw_data = FALSE)
dtest <- lgb.Dataset.create.valid(dtrain, data = test$data, label = test$label)
#--------------------Using validation set-------------------------
### valids is a list of lgb.Dataset, each of them is tagged with name
valids <- list(train = dtrain, test = dtest)
### To train with valids, use lgb.train, which contains more advanced features
### valids allows us to monitor the evaluation result on all data in the list
print("Train lightgbm using lgb.train with valids")
bst <- lgb.train(
data = dtrain
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, valids = valids
, nthread = 2L
, objective = "binary"
)
### We can change evaluation metrics, or use multiple evaluation metrics
print("Train lightgbm using lgb.train with valids, watch logloss and error")
bst <- lgb.train(
data = dtrain
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, valids = valids
, eval = c("binary_error", "binary_logloss")
, nthread = 2L
, objective = "binary"
)
Succeeded.
### lgb.Dataset can also be saved using lgb.Dataset.save
lgb.Dataset.save(dtrain, "dtrain.buffer")
### To load it in, simply call lgb.Dataset
dtrain2 <- lgb.Dataset("dtrain.buffer")
bst <- lgb.train(
data = dtrain2
, num_leaves = 4L
, learning_rate = 1.0
, nrounds = 2L
, valids = valids
, nthread = 2L
, objective = "binary"
)
Succeeded.
label <- getinfo(dtest, "label")
pred <- predict(bst, test$data)
err <- as.numeric(sum(as.integer(pred > 0.5) != label)) / length(label)
print(paste("test-error=", err))
Succeeded.
bank
I ran all of the following code exactly in order, in a single new R session.
library(lightgbm)
library(data.table)
data(bank, package = "lightgbm")
str(bank)
Succeeded.
### We are dividing the dataset into two: one train, one validation
bank_train <- bank[1:4000, ]
bank_test <- bank[4001:4521, ]
head(bank_train)
head(bank_test)
bank_rules <- lgb.prepare_rules(data = bank_train)
bank_train <- bank_rules$data
bank_test <- lgb.prepare_rules(data = bank_test, rules = bank_rules$rules)$data
str(bank_test)
Succeeded.
### Subtract 1 from the label because it must be in {0, 1}
bank_train$y <- bank_train$y - 1
bank_test$y <- bank_test$y - 1
### Data input to LightGBM must be a matrix, without the label
my_data_train <- as.matrix(bank_train[, 1:16, with = FALSE])
my_data_test <- as.matrix(bank_test[, 1:16, with = FALSE])
### Creating the LightGBM dataset with categorical features
### The categorical features can instead be passed to lgb.train, to avoid copying and pasting them repeatedly
dtrain <- lgb.Dataset(data = my_data_train,
label = bank_train$y)
dtest <- lgb.Dataset(data = my_data_test,
label = bank_test$y)
Succeeded. No logs produced.
# We can now train a model
model <- lgb.train(list(objective = "binary",
metric = "l2",
min_data = 1,
learning_rate = 0.1,
min_data = 0,
min_hessian = 1,
max_depth = 2,
categorical_feature = c(2, 3, 4, 5, 7, 8, 9, 11, 16)),
dtrain,
100,
valids = list(train = dtrain, valid = dtest))
### Try to find split_feature: 11
### If you find it, it means it used a categorical feature in the first tree
lgb.dump(model, num_iteration = 1)
### Subtract 1 from the label because it must be in {0, 1}
bank$y <- bank$y - 1
### Data input to LightGBM must be a matrix, without the label
my_data <- as.matrix(bank[, 1:16, with = FALSE])
### Creating the LightGBM dataset with categorical features
### The categorical features must be indexed like in R (1-indexed, not 0-indexed)
lgb_data <- lgb.Dataset(data = my_data,
#label = bank$y,
categorical_feature = c(2, 3, 4, 5, 7, 8, 9, 11, 16))
### We can now train a model
model <- lgb.train(list(objective = "binary",
metric = "l2",
min_data = 1,
learning_rate = 0.1,
min_data = 0,
min_hessian = 1,
max_depth = 2),
lgb_data,
100,
valids = list(train = lgb_data))
### Try to find split_feature: 2
### If you find it, it means it used a categorical feature in the first tree
lgb.dump(model, num_iteration = 1)
This failed, but with different issues than the one you reported. I think the problem is that bank at this point in your code is not a numeric matrix... it is a data.frame with some character columns!
bank$y <- bank$y - 1
Error in bank$y - 1 : non-numeric argument to binary operator
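A toy illustration of why that subtraction fails (a hypothetical character vector, mimicking the raw y column):
y <- c("no", "yes")  # the raw label column is character, not numeric
y - 1                # Error in y - 1 : non-numeric argument to binary operator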
Hi again,
I've tried the code for both the agaricus and bank datasets following your comments; it worked, and I did not get any error messages.
Thank you
great!
This issue has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this.
Hi everyone,
I'm using the lightgbm package on my Windows 10 computer, RStudio 3.6.3 version. I also asked how to install lightgbm for R a few weeks ago, and it was initialized successfully (I guess) with some help, but when I run the provided lightgbm code (bank dataset, #887) it gives me a model with Type: NULL, Length: 0, Size: 0 B, Value: NULL (empty). I also ran lightgbm on my 10 years of monthly univariate time series data, and it gives the same empty output. I don't know the reason for this issue; could it have been installed incorrectly, or did something else go wrong?
Thank you