Open Shinigami95 opened 4 years ago
Thanks for the report! Some questions to help you to debug the issue:

- Show the output of show dbs in the mongo shell.
- Show the output of show collections for the STH-related databases in the mongo shell.

I'm running it with Docker. The version is the latest Docker image, which I suppose should be the latest of this repository.
The config.js is set as follows (we changed the database address, and the default service and service path):
node@7aa616a01de8:/opt/sth$ cat config.js
/*
* Copyright 2015 Telefónica Investigación y Desarrollo, S.A.U
*
* This file is part of the Short Time Historic (STH) component
*
* STH is free software: you can redistribute it and/or
* modify it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the License,
* or (at your option) any later version.
*
* STH is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
* See the GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public
* License along with STH.
* If not, see http://www.gnu.org/licenses/.
*
* For those usages not covered by the GNU Affero General Public License
* please contact with: [german.torodelvalle@telefonica.com]
*/
var config = {};
// STH server configuration
//--------------------------
config.server = {
// The host where the STH server will be started.
// Default value: "localhost".
host: '0.0.0.0',
// host: 'localhost',
// The port where the STH server will be listening.
// Default value: "8666".
port: '8666',
// The service to be used if not sent by the Orion Context Broker in the notifications.
// Default value: "testservice".
defaultService: 'entornoc3',
// The service path to be used if not sent by the Orion Context Broker in the notifications.
// Default value: "/testservicepath".
defaultServicePath: '/pruebac3',
// A flag indicating if the empty results should be removed from the response.
// Default value: "true".
filterOutEmpty: 'true',
// Array of resolutions the STH component should aggregate values for.
// Valid resolution values are: 'month', 'day', 'hour', 'minute' and 'second'
aggregationBy: ['day', 'hour', 'minute'],
// Directory where temporary files will be stored, such as the ones generated when CSV files are requested.
// Default value: "temp".
temporalDir: 'temp',
// Max page size returned by a query
maxPageSize: '100'
};
// Cors Configuration
config.cors = {
// The enabled flag is used to set the CORS policy
enabled: 'false',
options: {
origin: ['*'],
headers: [
'Access-Control-Allow-Origin',
'Access-Control-Allow-Headers',
'Access-Control-Request-Headers',
'Origin, Referer, User-Agent'
],
additionalHeaders: ['fiware-servicepath', 'fiware-service'],
credentials: 'true'
}
};
// Database configuration
//------------------------
config.database = {
// The STH component supports 3 alternative models when storing the raw and aggregated data
// into the database: 1) one collection per attribute, 2) one collection per entity and
// 3) one collection per service path. The possible values are: "collection-per-attribute",
// "collection-per-entity" and "collection-per-service-path" respectively. Default value:
// "collection-per-entity".
dataModel: 'collection-per-entity',
// The username to use for the database connection. Default value: "".
user: '',
// The password to use for the database connection. Default value: "".
password: '',
// The URI to use for the database connection. It supports replica set URIs. This does not
// include the "mongo://" protocol part. Default value: "localhost:27017"
URI: 'MongoURLandPort',
// The name of the replica set to connect to, if any. Default value: "".
replicaSet: '',
// The prefix to be added to the service for the creation of the databases. Default value: "sth".
prefix: 'sth_',
// The prefix to be added to the collections in the databases. More information below.
// Default value: "sth_".
collectionPrefix: 'sth_',
// The default MongoDB pool size of database connections. Optional. Default value: "5".
poolSize: '5',
// The write concern (see http://docs.mongodb.org/manual/core/write-concern/) to apply when
// writing data to the MongoDB database. Default value: "1".
writeConcern: '1',
// Flag indicating if the raw and/or aggregated data should be persisted. Valid values are:
// "only-raw", "only-aggregated" and "both". Default value: "both".
shouldStore: 'both',
truncation: {
// Data from the raw and aggregated data collections will be removed if older than the value specified in seconds.
// Set the value to 0 or remove the property entry not to apply this time-based truncation policy.
// Default value: "0".
expireAfterSeconds: '0',
// The oldest raw data (according to insertion time) will be removed if the size of the raw data collection
// gets bigger than the value specified in bytes. In case of raw data the reference time is the one stored in the
// 'recvTime' property whereas in the case of the aggregated data the reference of time is the one stored in the
// '_id.origin' property. Set the value to 0 or remove the property entry not to apply this truncation policy.
// Default value: "0".
// The "size" configuration parameter is mandatory in case size collection truncation is desired as required by
// MongoDB.
// Notice that this configuration parameter does not affect the aggregated data collections since MongoDB does not
// currently support updating documents in capped collections which increase the size of the documents.
// Notice also that in case of the raw data, the size-based truncation policy takes precedence over the TTL one.
// More concretely, if "size" is set, the value of "expireAfterSeconds" is ignored for the raw data collections
// since currently MongoDB does not support TTL in capped collections.
size: '0',
// The oldest raw data (according to insertion time) will be removed if the number of documents in the raw data
// collections goes beyond the specified value. Set the value to 0 or remove the property entry not to apply this
// truncation policy. Default value: "0".
// Notice that this configuration parameter does not affect the aggregated data collections since MongoDB does not
// currently support updating documents in capped collections which increase the size of the documents.
max: '0'
},
// Attribute values equal to one or more blank spaces should be ignored and not processed either as raw data or for
// the aggregated computations. Default value: "true".
ignoreBlankSpaces: 'true',
// Database and collection names have to respect the limitations imposed by MongoDB (see
// https://docs.mongodb.com/manual/reference/limits/). To this end, the STH provides 2 main mechanisms: mappings and
// encoding, which can be configured using the next 2 configuration parameters.
// The mappings mechanism will substitute the original services, service paths, entity and attribute names and types
// by the ones defined in the configuration file. If enabled, the mappings mechanism will be the one applied.
nameMapping: {
// Default value: "true" (although we will set it to false until the Cygnus counterpart is ready and landed)
enabled: 'false',
// The path from the root of the STH component Node application to the mappings configuration file
configFile: './name-mapping.json'
},
// The encoding criteria is the following one:
// 1. Encode the forbidden characters using an escaping character (x) and a numerical Unicode code for each character.
// For instance, the / character will be encoded as x002f.
// 2. Database and collection names already using the above encoding must be escaped prepending another x,
// for instance, the text x002a will be encoded as xx002a.
// 3. The uppercase characters included in database names will be encoded using the mechanism stated in 1.
// 4. Collection names starting with 'system.' will be encoded as 'xsystem.'. For instance, system.myData will be
// encoded as xsystem.myData.
// Default value: "true" (although we will set it to false until the Cygnus counterpart is ready and landed)
nameEncoding: 'false'
};
// Logging configuration
//------------------------
config.logging = {
// The logging level of the messages. Messages with a level equal or superior to this will be logged.
// Accepted values are: "debug", "info", "warn" and "error". Default value: "info".
level: 'info',
// The logging format:
// - "json": writes logs as JSON.
// - "dev": for development. Used as the 'de-facto' value when the NODE_ENV variable is set to 'development'.
// - "pipe": writes logs separating the fields with pipes.
format: 'pipe',
// The time in seconds between proof of life logging messages informing that the server is up and running normally.
// Default value: "60"
proofOfLifeInterval: '60',
// The time in seconds between processed requests statistics appear in the logs
// Default value: "60"
processedRequestLogStatisticsInterval: '60'
};
module.exports = config;
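For reference, under the collection-per-entity data model configured above, STH derives collection names from the collection prefix, service path, entity id, and entity type, with an .aggr suffix for the aggregated data collections. A minimal sketch of that naming scheme (buildCollectionName is a hypothetical helper for illustration, not part of the STH code base, which also supports hashed and encoded variants):

```javascript
// Illustrative sketch of STH's collection-per-entity naming scheme.
// buildCollectionName is a hypothetical helper, not actual STH code.
function buildCollectionName(collectionPrefix, servicePath, entityId, entityType, aggregated) {
  // collection-per-entity: <prefix><servicePath>_<entityId>_<entityType>
  var name = collectionPrefix + servicePath + '_' + entityId + '_' + entityType;
  // Aggregated data collections carry an '.aggr' suffix; raw ones do not.
  return aggregated ? name + '.aggr' : name;
}

// Reproduces the collection name observed in this issue:
console.log(buildCollectionName('sth_', '/pruebac3', 'ParkingAccess-01', 'ParkingAccess', true));
// sth_/pruebac3_ParkingAccess-01_ParkingAccess.aggr
```

With aggregated set to false, the same inputs yield the raw data collection name that is expected alongside the .aggr one.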
The show databases command in mongo shows this:
> show databases
admin 0.000GB
cep 0.000GB
config 0.000GB
local 0.000GB
orion 0.000GB
orion-entornoc3 0.000GB
sth_entornoc3 0.000GB
The show collections output is this:
> show tables
sth_/pruebac3_ParkingAccess-01_ParkingAccess.aggr
And yes, it has data inside:
db['sth_/pruebac3_ParkingAccess-01_ParkingAccess.aggr'].find()
{ "_id" : { "attrName" : "door", "origin" : ISODate("2020-02-26T08:09:00Z"), "resolution" : "second", "range" : "minute" }, "points" : [ { "offset" : 0, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 1, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 2, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 3, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 4, "samples" : 1, "sum" : 0, "sum2" : 0, "min" : 0, "max" : 5e-324 }, { "offset" : 5, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 6, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 7, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 8, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 9, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 10, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 11, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 12, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 13, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 14, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 15, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 16, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 17, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 18, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 19, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { 
"offset" : 20, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 21, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 22, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 23, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 24, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 25, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 26, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 27, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 28, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 29, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 30, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 31, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 32, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 33, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 34, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 35, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 36, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 37, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 38, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 39, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 40, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 41, "samples" 
: 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 42, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 43, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 44, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 45, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 46, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 47, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 48, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 49, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 50, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 51, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 52, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 53, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 54, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 55, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 56, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 57, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 58, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity }, { "offset" : 59, "samples" : 0, "sum" : 0, "sum2" : 0, "min" : Infinity, "max" : -Infinity } ], "attrType" : "number" }...
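For readers parsing the document above: each aggregated document covers one origin instant at a given resolution, with one entry in points per offset inside the range (60 one-second buckets per minute here). Empty buckets start at min: Infinity and max: -Infinity so that the first real sample overwrites them (the max of 5e-324 at offset 4 appears to be a floating-point minimum-value artifact rather than a real measurement). A sketch of how such a bucket is updated, inferred from the field names in the dump (illustrative, not the actual STH update code):

```javascript
// Illustrative sketch of updating one STH-style aggregation bucket (not real STH code).
// Empty buckets use Infinity/-Infinity so any first sample replaces them.
function emptyBucket(offset) {
  return { offset: offset, samples: 0, sum: 0, sum2: 0, min: Infinity, max: -Infinity };
}

function addSample(bucket, value) {
  bucket.samples += 1;
  bucket.sum += value;          // supports sum-style aggregations
  bucket.sum2 += value * value; // supports variance-style aggregations
  bucket.min = Math.min(bucket.min, value);
  bucket.max = Math.max(bucket.max, value);
  return bucket;
}

// A 'second' resolution bucket inside a 'minute' range, as in the dump above:
var b = addSample(emptyBucket(4), 0);
// b.samples === 1, b.min === 0, b.max === 0
```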
Thanks in advance.
Note that the show collections command is only showing one collection (the one for aggregations):
> show tables
sth_/pruebac3_ParkingAccess-01_ParkingAccess.aggr
You should also see a collection for raw data there. The lack of that collection seems to be the cause of the Error when getting the raw data collection for retrieval (the collection 'null' may not exist) message you are getting.
So the question now is why that collection is not being populated. How do you feed the STH database? Which Cygnus sinks are you using?
In the last few days we have been experiencing this issue, using the latest STH and Cygnus software. Regards, Cesar Jorge
Hello, I'm using Cygnus to persist data in STH. The data is correctly written in MongoDB, but when I query the API with some parameters I get the following error in the STH log:
Even with this message, the API returns an OK (200) response with empty values. This only happens when I use the parameters "hLimit=3&hOffset=0" or "lastN=1", used as in the examples in the documentation.
When I use "aggrMethod=min&aggrPeriod=minute" in the same GET request without the other parameters, it returns the data correctly and doesn't show any errors in the log.
Can you tell me what the problem might be?
Thanks in advance.
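To make the two query styles above concrete, here is a sketch of the request URLs being compared, using the entity, attribute, and port from this issue. The /STH/v1/contextEntities path follows the STH v1 retrieval API; the host is an assumption for your deployment, and the requests would also need the fiware-service (entornoc3) and fiware-servicepath (/pruebac3) headers:

```javascript
// Illustrative STH v1 request URLs; host is an assumed example, entity/attribute
// values come from this issue. Requests also need fiware-service/-servicepath headers.
var base = 'http://localhost:8666/STH/v1/contextEntities/type/ParkingAccess' +
           '/id/ParkingAccess-01/attributes/door';

// Raw-data query: served from the raw collection, which is missing here, hence the error.
var rawUrl = base + '?hLimit=3&hOffset=0'; // or '?lastN=1'

// Aggregated query: served from the existing .aggr collection, so it works.
var aggrUrl = base + '?aggrMethod=min&aggrPeriod=minute';

console.log(rawUrl);
console.log(aggrUrl);
```

This matches the symptom described: only the raw-data parameters (hLimit/hOffset, lastN) trigger the error, while the aggregated parameters succeed.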