-
I am using the pandas.read_sql function with a Hive connection to extract a really large amount of data. I have a script like this:
df = pd.read_sql(query_big, hive_connection)
df2 = pd.read_sql(query_simple, hiv…
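One common way to keep memory bounded with `pd.read_sql` is the `chunksize` parameter, which turns the call into an iterator of DataFrames. A minimal sketch, using an in-memory SQLite database as a stand-in for the Hive connection (the table name and data here are hypothetical):

```python
import sqlite3
import pandas as pd

# Stand-in for the Hive connection from the question: an in-memory
# SQLite database filled with hypothetical sample rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big_table (id INTEGER, value REAL)")
conn.executemany("INSERT INTO big_table VALUES (?, ?)",
                 [(i, i * 0.5) for i in range(10_000)])

# Passing chunksize makes read_sql return an iterator of DataFrames,
# so the full result set is never held in memory at once.
chunks = pd.read_sql("SELECT * FROM big_table", conn, chunksize=2_000)
total_rows = sum(len(chunk) for chunk in chunks)
print(total_rows)  # 10000
```

Whether chunked reads actually reduce peak memory on the server side depends on the driver; with some Hive connectors the full result is still materialized before pandas sees it.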
-
Right now PostgreSQL is called in GetFeature with a single request;
it could be interesting to find a way to retrieve the data in several pieces,
and so begin sending it back to the client earlier.
At le…
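The idea of retrieving results in pieces can be sketched with a cursor's `fetchmany`, yielding batches so the first batch can go out to the client before the query is fully consumed. This uses an in-memory SQLite table as a stand-in for the PostgreSQL backend (the `features` table and row contents are hypothetical); with PostgreSQL itself, a server-side (named) cursor would play the same role:

```python
import sqlite3

# Stand-in for the PostgreSQL backend: an in-memory SQLite table
# holding hypothetical feature rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE features (id INTEGER, geom TEXT)")
conn.executemany("INSERT INTO features VALUES (?, ?)",
                 [(i, f"POINT({i} {i})") for i in range(500)])

def stream_features(conn, batch_size=100):
    """Yield result rows in batches, so the first batch can be sent
    back to the client before the whole result set is consumed."""
    cur = conn.execute("SELECT id, geom FROM features ORDER BY id")
    while True:
        batch = cur.fetchmany(batch_size)
        if not batch:
            break
        yield batch

batches = list(stream_features(conn))
print(len(batches), len(batches[0]))  # 5 100
```

In a real GetFeature implementation each batch would be encoded and flushed to the response stream as it arrives rather than collected into a list.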
-
We have a very large amount of data, which takes a long time to transfer, and sometimes the connection times out. I want to get the data for a specific date, or limit the data per request. How would this be possible? Please guid…
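Filtering by date and paging each request is usually done in the query itself, with a `WHERE` clause plus `LIMIT`/`OFFSET`. A minimal sketch against an in-memory SQLite table (the `events` table, column names, and dates are hypothetical; the real backend in the question is unspecified):

```python
import sqlite3

# Hypothetical table of dated records in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (day TEXT, payload INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("2024-01-01", i) for i in range(30)]
                 + [("2024-01-02", i) for i in range(30)])

def fetch_page(conn, day, page, page_size=10):
    """Restrict each request to one date and page through the rows
    with LIMIT/OFFSET, so no single response carries everything."""
    cur = conn.execute(
        "SELECT payload FROM events WHERE day = ? "
        "ORDER BY payload LIMIT ? OFFSET ?",
        (day, page_size, page * page_size))
    return [row[0] for row in cur]

first = fetch_page(conn, "2024-01-01", page=0)
last = fetch_page(conn, "2024-01-01", page=2)
print(len(first), first[0], last[-1])  # 10 0 29
```

For very large offsets, keyset pagination (`WHERE payload > last_seen ORDER BY payload LIMIT n`) is typically faster than `OFFSET`, since the database does not have to skip over all the preceding rows.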
-
How do you plan on addressing growth issues? Are you using the latest Apache big data / cloud solution?
-
**Is your feature request related to a problem? Please describe.**
I'm always frustrated when I see the same few slogans over and over again.
**Describe the solution you'd like**
I want to see a …
-
### Is there an existing issue for this?
- [x] I have searched the existing issues
### Describe your proposed enhancement in detail.
I am using nilearn to mask a large NIfTI image with the following com…
-
"Originally, we estimated it would take about 14 months to download the 600,000 Salmonella genomes in the SRA. Fortunately, another research group had assembled all the bacterial genomes (18), and we …
-
When I run the smooth script on bigger data (upsampled 2×), it gets killed. What is the maximum resolution that this tool can handle?