piettetech / PietteTech_DHT

DHT Sensor Library for Spark Core

result = DHT.acquireAndWait(); will sometimes hang indefinitely #2

Closed by owendelong 7 years ago

owendelong commented 9 years ago

`result = DHT.acquireAndWait();`

May never return in some circumstances. In those circumstances, at least on Particle Photon, it still manages to pet the watchdog timer.

Suggested enhancement: modify the function prototype as follows: `uint16_t DHT.acquireAndWait(uint32_t timeout = 0);`

The timeout parameter, if 0, would result in waiting indefinitely for a result, as is the current behavior.

If a non-zero timeout is specified, a value would be returned either upon completion of the acquire or after timeout milliseconds, whichever occurs first. If the acquire is still in progress, DHT_ACQUIRING should be returned; otherwise, completion or an appropriate error should be returned.
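
For concreteness, here is a minimal sketch (not the library's actual implementation) of how such a timeout could behave inside the class. It assumes the non-blocking acquire(), acquiring() and getStatus() helpers shown in the library's examples, uses the DHTLIB_* status codes that appear later in this thread, and returns an int rather than the uint16_t suggested above because the library's error codes are negative:

    // Sketch only: one possible shape for the proposed timeout, not merged library code.
    int PietteTech_DHT::acquireAndWait(uint32_t timeout) {  // timeout in ms, 0 = wait forever
        uint32_t start = millis();
        acquire();                                  // start a non-blocking, interrupt-driven read
        while (acquiring()) {
            if (timeout != 0 && (millis() - start) >= timeout)
                return DHTLIB_ERROR_ACQUIRING;      // deadline passed while still acquiring
        }
        return getStatus();                         // DHTLIB_OK or one of the DHTLIB_ERROR_* codes
    }

With a sketch like this, DHT.acquireAndWait() with no argument keeps the current wait-forever behavior, while DHT.acquireAndWait(5000) bounds the call to roughly five seconds.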

owendelong commented 9 years ago

The pull request I just submitted doesn't fix this problem, but it does make it less burdensome in that the caller can specify a timeout. For backwards compatibility, the pull request code defaults to the current behavior. I think a more rational default would be 10 (or possibly even 5) seconds, as these sensors generally take about 2 seconds to acquire, and if they haven't done so by then, they probably aren't going to.

aguarino77 commented 9 years ago

I am encountering the same problem with the Particle Core and a DHT22. Did you also play around with the delays inside the acquire function? I never considered changing it from 1.5 ms, but I am desperately looking for something that will avoid this annoying hanging of the board:

if (_type == DHT11)
    delay(18);                  // DHT11 Spec: 18ms min
else
    delayMicroseconds(1500);    // DHT22 Spec: 0.8-20ms, 1ms typ
owendelong commented 9 years ago

I didn't play with the innards of the acquire function. Adding a simple timeout solved the problem sufficiently for my needs. If that's likely to make your situation better (I've not seen it timeout more than once or twice in a row, ever, but I allow up to 5 retries), then you might want to try the code from the pull request I submitted. It doesn't break any existing functionality.
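
As a caller-side illustration, here is a sketch of the retry pattern described above, assuming the pull-request version of acquireAndWait() that takes a millisecond timeout. The 5-second timeout and 5 retries are just the values mentioned in this thread, not library defaults, and DHT is the sensor object from the library examples:

    // Sketch of the bounded-retry usage described above (values are examples, not defaults).
    int result = DHTLIB_ERROR_ACQUIRING;
    for (int attempt = 0; attempt < 5; attempt++) {   // allow up to 5 retries
        result = DHT.acquireAndWait(5000);            // give each attempt up to 5 seconds
        if (result == DHTLIB_OK)
            break;                                    // good sample, stop retrying
        Serial.print("Failed result: ");
        Serial.println(result);
        delay(2000);                                  // DHT22 needs roughly 2 s between reads
    }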

Not sure why the maintainer hasn't responded.

piettetech commented 9 years ago

Sorry for the absence. Thanks for your contribution. Is this a problem on the Photon or the Spark Core? I can take a look this weekend.

Scott


aguarino77 commented 9 years ago

I am having it with the Core

Andrea


piettetech commented 9 years ago

What pin are you using for the DHT? Also which DHT are you using?


piettetech commented 9 years ago

Which pin are you using for the DHT, and what version of the DHT are you using? I will try to set up a similar configuration and run some testing. I have had this library running for the past 6 months without a single error.


aguarino77 commented 9 years ago

Using D4 for communication, with a DHT22. I have intermittent problems: the library can work without problems for a month or so, then suddenly hangs and I need to restart the Core. I plan to include the timeout correction from owendelong to see if it improves…


piettetech commented 9 years ago

There are many reasons the Spark can hang. I've had three devices running non-stop for 6 months with several sensors (BMP, DHT, DS18B20 and an LCD display). Without knowing more about your code I can't help much. What happens when it fails?


aguarino77 commented 9 years ago

When it fails, it stops responding (timeout error), and the acquire routine is the only thing in the loop, so it is likely never returning. I do not blame your beautiful library; the sensor might be defective. But the fact is that if something does not work as expected during the acquire, it simply freezes…


owendelong commented 9 years ago

I don’t have any cores. I only tested it on the Photon.

I still think adding the timeout feature (I submitted a pull request) will be useful in any case because it allows one to make the function deterministic without significant overhead and the default call (no arguments) still results in the original behavior so it is 100% backwards compatible with the existing library.

Thanks,

Owen


owendelong commented 9 years ago

I’m using a DHT22, getting power from Pin 5 (set to OUTPUT, driven HIGH to power up, then a delay to settle, then reading), with Pin 6 for data.
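
In code form, that power-up sequence looks roughly like the sketch below. readPoweredDHT is just an illustrative helper name; the pin numbers come from this comment, the 1-second settle delay and 5-second timeout from the full program later in the thread, and DHT is the globally declared sensor object:

    #define DHTPOWER 5    // GPIO that supplies power to the DHT22 (per this comment)
    #define DHTPIN   6    // data pin

    // Illustrative helper: power the probe from a GPIO, let it settle, then take one reading.
    int readPoweredDHT() {
        pinMode(DHTPOWER, OUTPUT);
        digitalWrite(DHTPOWER, HIGH);   // switch the probe on
        delay(1000);                    // let it stabilize after power-up
        return DHT.acquireAndWait(5000);
    }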

Owen


owendelong commented 9 years ago

Scott,

I did some debugging on this using the serial port, and I can definitely say that acquireAndWait() was never returning when I was having these hangs.

For your convenience, I'm including my code here so that you can (perhaps) reproduce the problem in your testing. I was seeing a hang every few days, sometimes more than once per day, before I added the timeout.

Note: my code is written against the library with the timeout capability, but you can simply take the timeout specification out of the call to acquireAndWait() and it will still hang as before.

This code is set up for the Particle Dev environment, not the https://build.particle.io environment.

The code is a bit sloppy, and I'm in the process of building a cleaner, more elegant codebase now that I have something that works, but I'm not a professional programmer, just a hobbyist.

Owen

// This #include statement was automatically added by the Spark IDE.
#include "application.h"

// This #include statement was automatically added by the Spark IDE.
#include "HttpClient.h"

// This #include statement was automatically added by the Spark IDE.
#include "PietteTech_DHT.h"

// Library required ISR wrapper declaration
void dht_wrapper();                 // must be declared before the lib initialization

#define DHTPOWER 5
#define DHTPIN 6
#define DHTTYPE DHT22
#define SERVER "www.delong.com"
#define PORT 80
#define CGI_PATH "/cgi-bin/post_temp.cgi"

char MY_ID[40] = "Invalid_Photon";

#define MY_VER "0.02by"             // Version Number String
#define CYCLE_DURATION 60           // Poll at 60 second intervals
#define LOGGING

// Lib instantiation (required for the DHT.* calls below)
PietteTech_DHT DHT(DHTPIN, DHTTYPE, dht_wrapper);

SYSTEM_MODE(MANUAL);

// TODO: Migrate Serial.print statements to SYSLOG functionality
HttpClient http;

http_header_t headers[] = {
  { "Accept", "*/*" },
  { NULL, NULL },
};

const char *HAS_EXT_ANT[] = {
  "23002f000847343337373738",
  "3c002e000347343337373738",
  NULL,
};

uint32_t next_time = millis() + (CYCLE_DURATION * 1000) - 1000;

http_request_t request = {0};
http_response_t response;

void dht_wrapper() {
  DHT.isrCallback();
}

bool is_external(const char *MY_ID) {
  uint8_t i;
  for(i=0; HAS_EXT_ANT[i] != NULL; i++) {
    if(strncmp(MY_ID, HAS_EXT_ANT[i], 40) == 0)
      return true;
  }
  return false;
}

void setup() {
  Serial.begin(115200);
  Serial.println("Launching");
  pinMode(7, OUTPUT);
  digitalWrite(7, LOW);
  (System.deviceID()).toCharArray(MY_ID, 40);
  Serial.print("My ID is ");
  Serial.println(MY_ID);
  if (is_external(MY_ID)) {
    WiFi.selectAntenna(ANT_EXTERNAL);
  }
}

void loop() {
  int result;
  float tC;
  float hum;
  char tbuf[10];
  char hbuf[10];
  char Query[1024];
  char logbuf[80];
  uint32_t startup = millis();
  uint8_t i;

  Serial.print("Starting connection process");
  Spark.connect();
  Serial.println(".");
  Serial.println("Application>\tGathering Data.");
  // Gather the data
  pinMode(DHTPOWER, OUTPUT);
  digitalWrite(DHTPOWER, HIGH);
  if (Spark.connected()) {
    digitalWrite(7, HIGH);
    Spark.process();
  }
  delay(1000);                        // Let probe stabilize after powerup.
  for(i=0; i<5; i++) {
    if (i > 0) { Serial.println("Retrying"); }    // Retry up to 5 times
    result = DHT.acquireAndWait(5000);
    if (result == DHTLIB_ERROR_ACQUIRING) {
      System.sleep(SLEEP_MODE_DEEP, 2);   // Reset system and try again in 2 seconds.
      delay(2000);
    }
    startup = millis();
    if (result == DHTLIB_OK) break;     // Got a valid sample.
    Serial.print("Failed result: ");
    Serial.println(result);
  }
  Serial.println("\nApplication>\tData acquisition complete.");
  if (Spark.connected()) {
    digitalWrite(7, HIGH);
    Spark.process();
  }
  Serial.print("Application>\tChecking result (");
  Serial.print(result);
  Serial.println(").");
  switch(result) {
  case DHTLIB_OK:
      hum = DHT.getHumidity();
      tC = DHT.getCelsius();
      Serial.print("Application>\t\tHumidity: ");
      Serial.println(hum);
      Serial.print("Application>\t\tTemperature: ");
      Serial.println(tC);
      sprintf(logbuf, "DHT_OK");
      break;

  case DHTLIB_ERROR_CHECKSUM:
      Serial.println("Error\tChecksum error");
      sprintf(logbuf, "Error\tChecksum error");
      break;

  case DHTLIB_ERROR_ISR_TIMEOUT:
      Serial.println("Error\n\r\tISR time out error");
      sprintf(logbuf, "Error\tISR time out error");
      break;

  case DHTLIB_ERROR_RESPONSE_TIMEOUT:
      Serial.println("Error\n\r\tResponse time out error");
      sprintf(logbuf, "Error\tResponse time out error");
      break;

  case DHTLIB_ERROR_DATA_TIMEOUT:
      Serial.println("Error\n\r\tData time out error");
      sprintf(logbuf, "Error\tData time out error");
      break;

  case DHTLIB_ERROR_ACQUIRING:
      Serial.println("Error\n\r\tAcquiring");
      sprintf(logbuf, "Error\tAcquiring");
      break;

  case DHTLIB_ERROR_DELTA:
      Serial.println("Error\n\r\tDelta time to small");
      sprintf(logbuf, "Error\tDelta time to small");
      break;

  case DHTLIB_ERROR_NOTSTARTED:
      Serial.println("Error\n\r\tNot started");
      sprintf(logbuf, "Error\tNot started");
      break;

  default:
      Serial.println("Unknown error");
      sprintf(logbuf, "Unknown error");
      break;

  }
  // Report the data
  Serial.println("Application>\tPreparing Query");
  request.hostname = SERVER;
  request.port = PORT;
  String(tC).toCharArray(tbuf, 6);
  String(hum).toCharArray(hbuf, 6);
  if (result == DHTLIB_OK) {
    sprintf(Query, "%s?my_id=%s&ver=%s&ant=%s&sensor_unit_1=%s&temp_1=%s&hum_1=%s&logmsg=%s&next_time=%ld&current_time=%ld",
        CGI_PATH, MY_ID, MY_VER, is_external(MY_ID) ? "ext" : "int", MY_ID, tbuf, hbuf, logbuf, next_time, millis());
  } else {
    sprintf(Query, "%s?my_id=%s&ver=%s&logmsg=%s&next_time=%ld&current_time=%ld",
        CGI_PATH, MY_ID, MY_VER, logbuf, next_time/4, millis());
  }
  request.path = Query;
  // Make sure WIFI is up.
  for(result=0; millis() < startup+30000 && !Spark.connected(); result++) {
    Serial.print(".");
    delay(1000);
  }
  digitalWrite(7, Spark.connected() ? HIGH : LOW);
  // delay(5000);
  if (!Spark.connected()) {
    RGB.control(true);
    RGB.color(255,0,0);
    RGB.brightness(255);
    Serial.println("Not connected");
    delay(250);
    next_time -= millis();
    next_time /= 1000;
    if(next_time < 5 || next_time > CYCLE_DURATION) next_time = 5;
    System.sleep(SLEEP_MODE_DEEP, next_time);   // Connect failed. Sleep 1/4 of normal time and retry.
    delay(1000);
  }
  // Send the request
  Serial.println("Application>\tSending Query");
  Serial.print("\t\t\t\t"); Serial.println(request.hostname);
  Serial.print("\t\t\t\t"); Serial.println(request.port);
  Serial.print("\t\t\t\t"); Serial.println(request.path);
  http.get(request, response, headers);
  Serial.print("Application>\tQuery Returned ");
  Serial.println(response.status);
  if (response.status < 0) {
    RGB.control(true);
    RGB.color(255,0,0);
    RGB.brightness(255);
    Spark.disconnect();       // Clearly Spark.connect() didn't succeed, even if it thinks it did.
    delay(2000);
    WiFi.connect();
    delay(2000);
    if (WiFi.ready()) {
      RGB.color(255,255,0);   // Use Yellow to indicate not connected to cloud but WiFi OK
    }
    // Retry up to 5 times.
    for(result = 0; result < 5; result++) {
      startup = millis();
      while(millis() < startup+10000 && !WiFi.ready()) {
        delay(250);
        Spark.process();
        digitalWrite(7, (millis() & 0x100) ? HIGH : LOW);
      }
      Serial.println("Application>\tSending Query");
      request.hostname = "";
      request.ip = {192,159,10,7};
      http.get(request, response, headers);
      if (response.status < 0) {
        digitalWrite(7, LOW);
        if (result % 2) {
          RGB.control(false);
          Spark.disconnect();
          delay(2000);
          Spark.connect();
          delay(2000);
        } else {
          RGB.control(true);
          RGB.color(255,255,255);
          RGB.brightness(255);
          WiFi.disconnect();
          delay(2000);
          RGB.color(255,0,0);
          WiFi.connect();
          delay(2000);
        }
        if (Spark.connected()) {
          digitalWrite(7, HIGH);
        } else if (WiFi.ready()) {
          RGB.color(255,255,0);   // Use Yellow to indicate not connected to cloud
          Spark.connect();
          delay(2000);
          if (Spark.connected()) {
            RGB.control(false);
            digitalWrite(7, HIGH);
          }
        } else {
          RGB.color(255,0,0);     // WiFi connection failed too.
        }
        delay(result * 100);
        Serial.print(".");
        Serial.println("Retrying...");
        continue;
      }
      Serial.println("Finished...");
      if (response.status < 0) {
        Serial.println("Timed out without connecting successfully.");
      } else if (response.status == 200) {
        Serial.println("Connection succeeded.");
      } else {
        Serial.print("Unknown result: ");
        Serial.println(response.status);
      }
      break;    // Connected, don't retry again, even if result is not 200.
    }
  }
  if (response.status != 200) {
    analogWrite(7, 255); delay(100);
    analogWrite(7, 0);   delay(100);
    analogWrite(7, 255); delay(100);
    analogWrite(7, 128);
    if (response.status < 0) {      // TCP Connect failed -- RED
      Serial.println("RED");
      RGB.color(255,0,0);
    } else {                        // Some other HTTP Response code -- BLUE
      Serial.println("BLUE");
      RGB.color(0,0,255);
    }
    result = millis();
    while (result + 60000 < millis()) {   // Show error for 60 seconds
      RGB.brightness(millis() % 256);
      Spark.process();
      delay(5);
    }
  }
  RGB.control(false);
  Spark.process();
  Serial.print("Application>\tResponse status: ");
  Serial.println(response.status);
  Serial.print("Application>\tHTTP Response Body: ");
  Serial.println(response.body);
  startup = millis();
  Serial.print("30 second processing loop: ");
  while (millis() < startup+30000 && millis() > startup) {  // Terminate on millis() counter wrap
    // Wait 15 seconds for queued update
    Spark.process();
    delay(100);
  }
  Serial.println("Completed.");
  // Sleep until next report period
  Serial.println("Sleeping.");
  delay(250);
  next_time -= millis();
  next_time /= 1000;
  if (next_time < 5 || next_time > CYCLE_DURATION) next_time = 5;
  System.sleep(SLEEP_MODE_DEEP, next_time);
  delay(2000);
}

gusgonnet commented 9 years ago

Hi Owen, when you say "The pull request I just submitted doesn't fix this problem, but it does make it less burdensome in that the caller can specify a timeout", do you mean that your pull request fixes the hanging issue, but does not fix the acquiring issue? If I understand correctly, with your fix in place and specifying a timeout, you end up with a Core or Photon that does not hang but from time to time will not acquire properly (and abort due to the timeout).

Is my understanding correct? Thank you both, Owen and Scott! PS: I sometimes experience this issue where my Particle Core hangs; I am using a DHT22 on pin 4.

owendelong commented 9 years ago

I mean that I did not diagnose or resolve whatever underlying situation causes acquireAndWait() to never return. I merely created the ability to specify a limit on how long can elapse before acquireAndWait() returns, regardless of the acquisition state.

With my fix in place AND specifying a timeout, you can guarantee that acquireAndWait() will always return after no longer than the specified amount of time. However, if you call acquireAndWait() without a timeout, an infinite hang is still possible. In theory, if the library were functioning completely correctly, that should not be possible: there should always be either a result or an error.

My pull request forces the issue by returning an error if an acquisition isn't successful within a specified amount of time. It doesn't correct whatever issue in the existing library leads to the hang in the first place.

owendelong commented 9 years ago

Scott... Were you able to get anywhere with it over the weekend? Could you please consider integrating my pull request as an interim workaround? It's a pretty small change to the code base, but I am kind of stuck doing my Particle development in the offline environment until this is resolved, and I'd really like to be able to use the online environment because of the differences in how the two environments handle libraries.

gusgonnet commented 9 years ago

Thank you Owen for your explanation.

On another note, why do you say you are stuck with the offline environment? As I understand it, you could copy the library files into your online project and build it without referencing the library directly, essentially the same copy-paste you did in your offline environment but online, no? Good luck, Gustavo.

owendelong commented 9 years ago

I suppose I could do that, but I'd like to keep libraries as libraries rather than hand-weave a library into my code to work around a bug in the library.

gusgonnet commented 8 years ago

Hi @piettetech, is it in your plans to add the timeout proposed by @owendelong to your library? I have observed benefits from having a timeout myself. If it is not in the plans, do you mind if I fork your wonderful library, add the timeout, and then publish it in the Particle library system? All original credit will go to you, of course. Thank you, Gustavo.

owendelong commented 8 years ago

Gustavo, you don't need to write your own. You can use my fork, which is what the pull request is based on; it already fully implements the timeout.

I suppose that since the pull request seems to be languishing, I should submit it to Particle.

Owen


piettetech commented 8 years ago

Sorry guys, I have been terrible and completely distracted.

Owen, I will look at merging in your pull request and re-publishing.

Scott


owendelong commented 8 years ago

Thanks!!

Owen


gusgonnet commented 8 years ago

That would be awesome, thank you.

gusgonnet commented 8 years ago

Thank you, Scott, for pushing this forward. Gustavo.