Open tianjixuetu opened 1 year ago
Why does Arrow C++ compute so slowly?
Yes, all the code is in the repo: Arrow C++, pure Python, and pure C++. @mapleFU
Can you provide the corresponding Python code?
And what is a.out here?
Also, pandas is not "pure Python"; depending on the version, it might switch to Arrow or other underlying implementations.
a.out is the binary compiled from the pure C++ code. The pure C++ code is:
#include <iostream>
#include <fstream>
#include <sstream>
#include <vector>
#include <cmath>
#include <chrono>

// Compute the mean
double Mean(const std::vector<double>& data) {
    double sum = 0.0;
    for (const double& value : data) {
        sum += value;
    }
    return sum / data.size();
}

// Compute the sample standard deviation (ddof = 1)
double StandardDeviation(const std::vector<double>& data) {
    double mean = Mean(data);
    double variance = 0.0;
    for (const double& value : data) {
        variance += std::pow(value - mean, 2);
    }
    return std::sqrt(variance / (data.size() - 1));
}

int main() {
    auto start_time = std::chrono::high_resolution_clock::now();
    std::ifstream file("./fund_nav.csv");
    if (!file.is_open()) {
        std::cerr << "Failed to open the CSV file." << std::endl;
        return 1;
    }
    std::vector<double> returns;
    double previous_nav = 0.0;
    std::string line;
    getline(file, line); // Skip the header line
    while (getline(file, line)) {
        std::istringstream ss(line);
        std::string date, nav_str, cum_nav_str;
        getline(ss, date, ',');
        getline(ss, nav_str, ',');
        getline(ss, cum_nav_str, ',');
        // std::cout << line << std::endl;
        try {
            double nav = std::stod(cum_nav_str);
            if (previous_nav > 0.0) {
                double daily_return = (nav - previous_nav) / previous_nav;
                returns.push_back(daily_return);
            }
            previous_nav = nav;
        } catch (const std::invalid_argument& e) {
            std::cerr << "Invalid data found: " << cum_nav_str << std::endl;
        }
    }
    file.close();
    if (returns.empty()) {
        std::cerr << "No returns data found." << std::endl;
        return 1;
    }
    // Compute the mean and standard deviation
    double mean_return = Mean(returns);
    double std_deviation = StandardDeviation(returns);
    // Compute the Sharpe ratio
    double trading_days_per_year = 252.0;
    double sqrt_trading_days_per_year = std::sqrt(trading_days_per_year);
    double sharpe_ratio = (mean_return * sqrt_trading_days_per_year) / std_deviation;
    // std::cout << "mean_return " << mean_return << std::endl;
    // std::cout << "std_deviation " << std_deviation << std::endl;
    // std::cout << "sqrt_trading_days_per_year " << sqrt_trading_days_per_year << std::endl;
    std::cout << "the result of sharpe ratio : " << sharpe_ratio << std::endl;
    auto end_time = std::chrono::high_resolution_clock::now();
    auto duration = std::chrono::duration_cast<std::chrono::microseconds>(end_time - start_time);
    std::cout << "the time for pure C++ to read the data and calculate the sharpe_ratio: "
              << duration.count() / 1000.0 << " ms" << std::endl;
    return 0;
}
The Python code is:
import pandas as pd
import empyrical as ep
import time
a = time.perf_counter()
data = pd.read_csv("./fund_nav.csv")
returns = data['复权净值'].pct_change().dropna()
sharpe_ratio = ep.sharpe_ratio(returns)
# print("mean_return ",returns.mean())
# print("std_deviation ",returns.std())
# print("sqrt_trading_days_per_year ",252**0.5)
print("the result of sharpe ratio : ", sharpe_ratio)
b = time.perf_counter()
print(f"the time for Python to read the data and calculate the sharpe_ratio: {(b-a)*1000.0} ms")
You are right, the Python code is not pure Python; it uses pandas and numpy.
Would code like the below help?
arrow::Datum avg_return;
arrow::Datum avg_std;
double daily_sharpe_ratio;
// Create an Arrow double scalar
double days_of_year_double = 252.0;
double sqrt_year = std::sqrt(days_of_year_double);
ARROW_ASSIGN_OR_RAISE(avg_return,
                      arrow::compute::CallFunction("mean", {fund_returns}));
arrow::compute::VarianceOptions variance_options;
variance_options.ddof = 1;
ARROW_ASSIGN_OR_RAISE(avg_std,
                      arrow::compute::CallFunction("stddev", {fund_returns}, &variance_options));
daily_sharpe_ratio = avg_return.scalar_as<::arrow::DoubleScalar>().value /
                     avg_std.scalar_as<::arrow::DoubleScalar>().value;
std::cout << "the computed Sharpe ratio is: " << daily_sharpe_ratio * sqrt_year << std::endl;
auto end_time = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::microseconds>(end_time - start_time);
std::cout << "total time for C++ to read the data and compute the Sharpe ratio: "
          << duration.count() / 1000.0 << " ms" << std::endl;
return arrow::Status::OK();
Thank you very much, but fund_returns is not calculated from the fund_nav values.
I didn't change the reading-from-csv part (though it could also be optimized; currently more columns than expected are read).
OK, thank you. However, I think it is not a valid approach.
Can you explain why this is not a valid way?
And what about the performance now?
The performance is just a little better than the previous version, and it is still worse than the Python code. You only changed the Sharpe ratio calculation; everything else is the same, so it is not a valid approach. By the way, was this code generated by ChatGPT?
Calling a compute function with scalars is like the Volcano model in a database; it has the cost of:
- Finding the function (in dispatch)
- Detecting the input type
- Compute -> this is the only logic we actually need
- Wrapping the output
The pure C++ code is a bit like codegen in a database system. You already know the types (though reading from a file might suffer from non-optimal performance), so computing with raw C++ on self-defined types will be faster. You can achieve similar performance by using templates to compute the logic directly. So I don't think it's a good way if you can ensure the function call and know the input/output types. Also, when I ran the benchmark locally, the time mainly went to:
- Setting up the framework.
- Dispatching the function.
So you may want to benchmark just the "compute time" rather than the whole program; the initialization of arrow::compute might take some time. Specifically, you can do:
auto registry = ::arrow::compute::GetFunctionRegistry();
// compute the returns
auto start_time = std::chrono::high_resolution_clock::now();
By the way, was this code generated by ChatGPT?
No.
Your points make a lot of sense, but when it comes to Arrow as a standalone module providing computation capabilities, especially in C++, the performance is unexpectedly slower than Python code. This is somewhat unacceptable, and there is a significant need for improvement. Are you familiar with Arrow? Do you have specific methods to implement data reading and computation to achieve speeds close to C++?
IMO, I don't think comparing like this is fair: during initialization, arrow::compute registers lots of functions, and that happens only once per process.
The Python code might do its initialization before the real computation is called. Have you tried:
auto registry = ::arrow::compute::GetFunctionRegistry();
auto start_time = std::chrono::high_resolution_clock::now();
Also, using compute with a Scalar works, but it's not advised, since you already know the types yourself.
I just tried it, but when I use this code the speed fluctuates significantly, and there's no clear improvement. @mapleFU
You can try to find out how the remaining time is spent. After pre-initializing this data, the remaining time is decided by reading the CSV, processing, and some memcpy.
Since the workload is memory-bound and might introduce some threading, the unstable timing may come from threading or initialization.
Also, I don't think running a program once is a suitable benchmark; maybe you can run it multiple times or introduce Google Benchmark. On my machine, the main time comes from loading the registry and reading the small CSV file, and it takes about 0.08 ms. So I don't know what occupies your execution time.
Well, I will give up on Arrow C++ for now and recode empyrical in pure C++. Looking forward to Arrow becoming more efficient.
Describe the usage question you have. Please include as many useful details as possible.
I want to use Arrow to recode some projects, for example empyrical, pyfolio, and backtrader. The first target is reading CSV data and computing the Sharpe ratio. I tested three approaches; however, Arrow C++ is the slowest.
I pushed my code and data to a repo, learn_arrow.
I suspect that something is wrong. Is there any way to speed up Arrow when reading data and computing?
The Arrow code is below; you can get all of the code and data from learn_arrow.
Component(s)
C++