-
When the specified [cluster version](https://github.com/Azure/azure-rest-api-specs/blob/792db17291c758b2bfdbbc0d35d0e2f5b5a1bd05/specification/hdinsight/resource-manager/Microsoft.HDInsight/stable/202…
-
We are using a Jupyter notebook installed on an HDInsight Spark cluster. While executing a long-running notebook through the FQDN URL, we are facing WebSocket connection failures and kernel disconnection.
If we …
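One common mitigation for idle WebSocket drops is enabling Tornado's keepalive pings in the notebook server config. This is a minimal sketch, not verified on HDInsight; the setting names come from Tornado's application settings, and the interval values are illustrative assumptions.

```python
# jupyter_notebook_config.py -- illustrative sketch, assumes a Tornado-based
# notebook server. Periodic pings keep idle proxies from silently dropping
# the kernel's WebSocket connection during long-running cells.
c.NotebookApp.tornado_settings = {
    "websocket_ping_interval": 30,   # seconds between keepalive pings
    "websocket_ping_timeout": 120,   # close the socket if no pong arrives in this window
}
```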
-
Hi folks!
I am trying to ingest many items into OpenTSDB using the HTTP api/put endpoint, but at times the system is unable to process more HTTP requests (timeouts). If I restart the service, the system can now inges…
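One way to reduce the number of HTTP requests hitting api/put is to batch many data points into a single POST, which OpenTSDB's put endpoint accepts as a JSON array. A minimal sketch follows; the endpoint URL, metric name, and tag values are assumptions for illustration.

```python
import json
import urllib.request

# Assumed OpenTSDB endpoint; port 4242 is the conventional default.
OPENTSDB_URL = "http://localhost:4242/api/put"

def build_put_payload(metric, points, tags):
    """Shape a batch of (timestamp, value) pairs into the JSON body
    that the /api/put endpoint accepts as a list of data points."""
    return [
        {"metric": metric, "timestamp": ts, "value": val, "tags": tags}
        for ts, val in points
    ]

def send_batch(payload):
    """POST one batch; batching keeps the request count down, which
    helps avoid exhausting the server with one request per point."""
    req = urllib.request.Request(
        OPENTSDB_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)  # raises on HTTP errors

if __name__ == "__main__":
    batch = build_put_payload(
        "sys.cpu.user",
        [(1609459200, 42.5), (1609459260, 43.1)],
        {"host": "web01"},
    )
    print(len(batch))  # 2
```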
-
In Task 1:
I copy the text below and paste it into the Jupyter notebook,
but it raises an error.
Could you help me work out how to solve it?
Thank you.
I use the code below:
import sqlContext.implicits._…
-
We are having trouble getting this library to perform against a Table Storage collection that has about 2 million records in it. Each record is approximately 4KB.
For example, a simple SELECT LIMIT …
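Part of the difficulty with large scans is that Table Storage returns results in segments (at most 1,000 entities per request) and the client must follow continuation tokens to cover all 2 million rows. The loop below is a sketch of that pattern; `fetch_segment` is a stand-in for the real REST/SDK call, and the fake backend exists only to make the paging logic concrete.

```python
# Sketch of the continuation-token loop a full Table Storage scan requires.
# `fetch_segment(token, page_size)` stands in for the real service call and
# must return (entities, next_token), with next_token None on the last page.
def scan_table(fetch_segment, page_size=1000):
    token = None
    while True:
        entities, token = fetch_segment(token, page_size)
        yield from entities
        if token is None:
            break

# Fake backend: 2,500 records served in 1,000-row segments.
def make_fake_backend(total):
    data = list(range(total))
    def fetch_segment(token, page_size):
        start = token or 0
        chunk = data[start:start + page_size]
        next_token = start + page_size if start + page_size < total else None
        return chunk, next_token
    return fetch_segment

rows = list(scan_table(make_fake_backend(2500)))
print(len(rows))  # 2500
```

Because each segment fetch is a separate round trip, a "simple" query over millions of 4KB records can still mean thousands of sequential HTTP requests unless the filter hits the partition key.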
-
[201-hdinsight-datalake-store-azure-storage](https://github.com/Azure/azure-quickstart-templates/tree/master/201-hdinsight-datalake-store-azure-storage)
### Issue Details
When trying to create a…
-
Support for Azure Blob-Store - WASB Hadoop Filesystem
Hi,
From h2ostream:
we are using Azure Blobstore (wasb interface - part of Hadoop release 2.7.1 https://hadoop.apache.org/docs/stable/had…
-
As a result of this limitation, I have to create distinct partitions for all the dates, plus it does not support high-precision datetime datatypes. I ran into a similar limitation with HiveQL on HDINSI…
-
Logging data to the hard drive of the analysis host is suitable for most cases, but when dealing with very large numbers of targets or when a web request is the only quick way to push results out of t…
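The push-over-HTTP approach described above can be sketched as a small shipper that batches result records into one JSON POST, so a large number of targets does not turn into one request per record. The collector URL and record fields here are hypothetical placeholders.

```python
import json
import urllib.request

# Hypothetical collector endpoint; substitute whatever the analysis host exposes.
COLLECTOR_URL = "https://collector.example.com/ingest"

def build_log_request(records):
    """Package a batch of result records as a single JSON POST request.
    Shipping batches rather than one request per record keeps the HTTP
    overhead manageable when the target count is very large."""
    body = json.dumps({"records": records}).encode("utf-8")
    return urllib.request.Request(
        COLLECTOR_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_log_request([{"target": "host-1", "status": "ok"}])
print(req.get_method())  # POST
```

The request is only constructed here; calling `urllib.request.urlopen(req)` would actually send it to the collector.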
-
I am having an issue getting data from Azure HDInsight. The code fails when processing any results, even for the base query that lists the tables. A row for the table query comes back as:
{{…