SamuelTJackson opened this issue 4 years ago
Could you have a look at the /tmp/contextBroker.log file (inside the docker container running CB)? Maybe it contains some useful information that can help to understand the problem.
I updated my post!
The log seems to correspond to a clean Orion startup. So, let's check some other things...
In the case of a failing test, several additional files are produced. In particular, in the same directory as the failing <file>.test
file you will find:
<file>.contextBroker.log
<file>.diff
<file>.out
<file>.regexpect
<file>.shell
<file>.shellInit
<file>.shellInit.stderr
<file>.shellInit.stdout
<file>.shell.stderr
<file>.teardown
<file>.teardown.stderr
<file>.teardown.stdout
Let's have a look at the .stderr and .stdout ones. Could you enter the docker container and examine those files in order to check if they provide some clue?
(Test files are under test/functionalTest/cases from Orion's root)
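As a sketch, a small helper along these lines can dump every non-empty artifact at once (the function name, the container name and the in-container path are illustrative assumptions, not part of the harness):

```shell
# Print every non-empty .stderr / .stdout artifact under a functional-test
# cases directory. The directory is a parameter so it can be run anywhere.
show_test_artifacts() {
  dir="$1"
  for f in "$dir"/*/*.stderr "$dir"/*/*.stdout; do
    [ -s "$f" ] || continue   # skip empty files and unmatched globs
    echo "==> $f"
    cat "$f"
  done
}

# Inside the container (container name and path are assumptions):
#   docker exec -it orion_test bash
#   show_test_artifacts /opt/fiware-orion/test/functionalTest/cases
```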
cat */*.stderr:
0000_bad_requests/erroneous_input.shellInit: line 42: mongo: command not found
0000_bad_requests/pagination_error_in_uri_params.shellInit: line 42: mongo: command not found
0000_bad_requests/service_not_recognized.shellInit: line 42: mongo: command not found
0000_content_related_headers/accept_fail_01.shellInit: line 42: mongo: command not found
0000_content_related_headers/accept_fail.shellInit: line 42: mongo: command not found
0000_content_related_headers/in_out_formats.shellInit: line 42: mongo: command not found
0000_content_related_headers/missing_content_type_header.shellInit: line 42: mongo: command not found
0000_content_related_headers/zero_content_length.shellInit: line 42: mongo: command not found
0000_deprecated_checkings/geoquery_circle_deprecated.shellInit: line 42: mongo: command not found
0000_deprecated_checkings/get_entity_dates_with_options.shellInit: line 42: mongo: command not found
0000_deprecated_checkings/legacy_geolocalization_area_json.shellInit: line 42: mongo: command not found
0000_deprecated_checkings/legacy_geoquery_bad_coords.shellInit: line 42: mongo: command not found
0000_deprecated_checkings/legacy_geoquery_circle_json.shellInit: line 42: mongo: command not found
0000_deprecated_checkings/legacy_geoquery_polygon_json.shellInit: line 42: mongo: command not found
0000_deprecated_checkings/legacy_location_no_actual_location_change.shellInit: line 42: mongo: command not found
0000_deprecated_checkings/legacy_wgs84.shellInit: line 42: mongo: command not found
0000_deprecated_checkings/metadata_id_as_regular_metadata.shellInit: line 42: mongo: command not found
0000_deprecated_checkings/metadata_id_duplicate_error.shellInit: line 42: mongo: command not found
0000_deprecated_checkings/metadata_id_service_not_found_old_urls.shellInit: line 42: mongo: command not found
0000_deprecated_checkings/ontimeinterval_subs.shellInit: line 42: mongo: command not found
0000_deprecated_checkings/uppercase_action_types_in_ngsiv2_batch_update.shellInit: line 42: mongo: command not found
0000_deprecated_checkings/xml_support.shellInit: line 42: mongo: command not found
0000_https_support/https.shellInit: line 42: mongo: command not found
0000_ipv6_support/ipv4_ipv6_both.shellInit: line 42: mongo: command not found
0000_json_parse/empty_payloads.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParseBadVerbDiscoverContextAvailability.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParseBadVerbRegisterContext.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParseContextEntitiesByEntityId.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParseContextEntityAttributes.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParseEntityByIdAttributeByName.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParseGetNotifyContext.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParseIndividualContextEntity.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParsePostDiscoverContextAvailability.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParsePostNotifyContextAvailability.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParsePostNotifyContext.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParsePostQueryContext.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParsePostRegisterContextNoEntities.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParsePostRegisterContextNoEntityId.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParsePostRegisterContext.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParsePostRegisterProvider.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParsePostSubscribeContextAvailability.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParsePostUnsubscribeContextAvailability.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParsePostUnsubscribeContext.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParsePostUpdateContextAvailabilitySubscription.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParsePostUpdateContext.shellInit: line 42: mongo: command not found
0000_json_parse/jsonParsePostUpdateContextSubscription.shellInit: line 42: mongo: command not found
0000_json_parse/json_throttling.shellInit: line 42: mongo: command not found
0000_large_requests/large_discover_context_availability.shellInit: line 42: mongo: command not found
0000_large_requests/large_query_context.shellInit: line 42: mongo: command not found
0000_large_requests/large_register_context_request.shellInit: line 42: mongo: command not found
cat */*.stdout:
accumulator running as PID 4307
Unable to start listening application after waiting 30
accumulator running as PID 8525
Unable to start listening application after waiting 30
I see two issues here:
Regarding mongo: command not found
it is because you haven't installed the mongo shell in the docker image. Related to this, I see that you have Orion and Mongo in separate images, which is not the typical setup for running tests (maybe it works by adjusting some env vars in testEnv.sh, but it hasn't been tested...). I'd recommend a setup similar to the one used in the CI image in the repository (have a look at https://github.com/telefonicaid/fiware-orion/blob/master/ci/rpm7), which installs Orion and Mongo in the same image.
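As a sketch of the first fix, assuming a CentOS 7 based image: generate the yum repo definition for the MongoDB packages and install the shell from it (the 3.6 branch is an assumption; match your server version). REPO_FILE defaults to the current directory so the snippet is safe to try outside the container:

```shell
# Write the MongoDB yum repo definition (copy it to /etc/yum.repos.d/ inside
# the container). The 3.6 branch is an assumption; use your server's version.
REPO_FILE=${REPO_FILE:-./mongodb-org-3.6.repo}
cat > "$REPO_FILE" <<'EOF'
[mongodb-org-3.6]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.6/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc
EOF

# Then, inside the container (as root):
#   cp mongodb-org-3.6.repo /etc/yum.repos.d/
#   yum install -y mongodb-org-shell
```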
Regarding Unable to start listening application after waiting 30
maybe it's because accumulator-server.py is not properly installed. You may need a make install_scripts just after make install.
I installed the mongo shell in the docker container. I also checked that accumulator-server.py is properly installed; I can run accumulator-server.py inside the container:
* Running on http://0.0.0.0:1028/
Now the first 28 tests pass:
orion_test | 0001/1137: 0000_bad_requests/erroneous_input.test ....................................................................... 01 seconds
orion_test | 0003/1137: 0000_bad_requests/pagination_error_in_uri_params.test ........................................................ 01 seconds
orion_test | 0004/1137: 0000_bad_requests/service_not_recognized.test ................................................................ 01 seconds
orion_test | 0005/1137: 0000_cli/bool_option_with_value.test ......................................................................... 01 seconds
orion_test | 0006/1137: 0000_cli/command_line_options.test ........................................................................... 00 seconds
orion_test | 0007/1137: 0000_cli/tracelevel_without_logLevel_as_DEBUG.test ........................................................... 00 seconds
orion_test | 0008/1137: 0000_cli/version.test ........................................................................................ 00 seconds
orion_test | 0009/1137: 0000_content_related_headers/accept_fail.test ................................................................ 02 seconds
orion_test | 0010/1137: 0000_content_related_headers/accept_fail_01.test ............................................................. 01 seconds
orion_test | 0011/1137: 0000_content_related_headers/in_out_formats.test ............................................................. 01 seconds
orion_test | 0012/1137: 0000_content_related_headers/missing_content_type_header.test ................................................ 02 seconds
orion_test | 0013/1137: 0000_content_related_headers/zero_content_length.test ........................................................ 01 seconds
orion_test | 0014/1137: 0000_deprecated_checkings/geoquery_circle_deprecated.test .................................................... 03 seconds
orion_test | 0015/1137: 0000_deprecated_checkings/get_entity_dates_with_options.test ................................................. 04 seconds
orion_test | 0016/1137: 0000_deprecated_checkings/legacy_geolocalization_area_json.test .............................................. 01 seconds
orion_test | 0017/1137: 0000_deprecated_checkings/legacy_geoquery_bad_coords.test .................................................... 01 seconds
orion_test | 0018/1137: 0000_deprecated_checkings/legacy_geoquery_circle_json.test ................................................... 04 seconds
orion_test | 0019/1137: 0000_deprecated_checkings/legacy_geoquery_polygon_json.test .................................................. 05 seconds
orion_test | 0020/1137: 0000_deprecated_checkings/legacy_location_no_actual_location_change.test ..................................... 01 seconds
orion_test | 0021/1137: 0000_deprecated_checkings/legacy_wgs84.test .................................................................. 04 seconds
orion_test | 0022/1137: 0000_deprecated_checkings/metadata_id_as_regular_metadata.test ............................................... 01 seconds
orion_test | 0023/1137: 0000_deprecated_checkings/metadata_id_duplicate_error.test ................................................... 01 seconds
orion_test | 0024/1137: 0000_deprecated_checkings/metadata_id_service_not_found_old_urls.test ........................................ 02 seconds
orion_test | 0025/1137: 0000_deprecated_checkings/ontimeinterval_subs.test ........................................................... 01 seconds
orion_test | 0026/1137: 0000_deprecated_checkings/uppercase_action_types_in_ngsiv2_batch_update.test ................................. 01 seconds
orion_test | 0027/1137: 0000_deprecated_checkings/xml_support.test ................................................................... 01 seconds
orion_test | 0028/1137: 0000_https_support/https.test ................................................................................ 07 seconds
orion_test | 0029/1137: 0000_ipv6_support/ipv4_ipv6_both.test ........................................................................ (FAIL 11 - SHELL-INIT exited with code 1) testHarness.sh/IPv6 IPv4 Both: (0000_ipv6_support/ipv4_ipv6_both.test)
orion_test | make: *** [functional_test] Error 11
stderr is empty. stdout:
{ "dropped" : "ftest", "ok" : 1 }
accumulator running as PID 5878
accumulator running as PID 5878
Unable to start listening application after waiting 30
Looking at the script we use for Travis CI, it seems it disables the ipv4_ipv6_both.test test (it is just a matter of renaming it with .DISABLED, see https://github.com/telefonicaid/fiware-orion/blob/master/ci/rpm7/build.sh#L99).
I'm not sure of the reason (I'm not the original author of build.sh and I'm afraid the author, @caa06d9c, left the project some time ago), but maybe IPv6 doesn't work correctly in a dockerized environment.
It would be great to debug the issue, but if you don't want to bother with this, the same strategy (i.e. disabling the test) may work for you.
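If you go the disabling route, the rename hack in ci/rpm7/build.sh boils down to something like this minimal sketch (the function name and parameters are illustrative):

```shell
# Disable a functional test by renaming it with a .DISABLED suffix so the
# harness skips it, mirroring the hack in ci/rpm7/build.sh.
disable_test() {
  cases_dir="$1"   # e.g. test/functionalTest/cases
  test_file="$2"   # e.g. 0000_ipv6_support/ipv4_ipv6_both.test
  mv "$cases_dir/$test_file" "$cases_dir/$test_file.DISABLED"
}

# In the Orion source tree this would be:
#   disable_test test/functionalTest/cases 0000_ipv6_support/ipv4_ipv6_both.test
```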
You need to enable IPv6 support in the Docker daemon. (https://docs.docker.com/config/daemon/ipv6/)
In the case of CentOS, you create a /etc/docker/daemon.json file as shown:
{
"ipv6": true,
"fixed-cidr-v6": "2001:db8:1::/64"
}
And restart the Docker daemon.
sudo systemctl restart docker
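Before restarting, it is worth validating the file, since a JSON syntax error in daemon.json can prevent dockerd from coming back up. A hedged sketch (DAEMON_JSON is a parameter; the check uses python3's stdlib json.tool, and the docker network inspect command is an assumption for recent Docker versions):

```shell
# Validate daemon.json before restarting the Docker daemon; a malformed file
# can stop dockerd from starting. The path is overridable for testing.
DAEMON_JSON=${DAEMON_JSON:-/etc/docker/daemon.json}
if python3 -m json.tool "$DAEMON_JSON" > /dev/null 2>&1; then
  echo "daemon.json: valid JSON"
else
  echo "daemon.json: missing or invalid" >&2
fi

# Then restart and confirm IPv6 is enabled on the default bridge:
#   sudo systemctl restart docker
#   docker network inspect bridge --format '{{.EnableIPv6}}'
```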
The following is for your information.
I created a docker image that contains both Orion and MongoDB, and ran the test harness for Orion with that image. The result is here (https://github.com/fisuda/report/blob/master/orion/20200225_orion_aarch64/x86_64/log-20200222_124711.txt).
0029/1137: 0000_ipv6_support/ipv4_ipv6_both.test ........................................................................ 04 seconds
You create a Dockerfile, test.sh and mongodb.repo file as shown, then build and run a docker image. The test result will be created in the /log directory.
docker build --build-arg CLEAN_DEV_TOOLS=0 -t orion_test .
docker run -d --name orion_test orion_test
docker exec -it orion_test bash
./test.sh
#!/bin/sh
export LANG=C
export LOG=/log/log-`date "+%Y%m%d_%H%M%S"`.txt
> $LOG
cd /opt/fiware-orion
. scripts/testEnv.sh
. /opt/ft_env/bin/activate
echo "=== functional_test ===" >> $LOG
make functional_test INSTALL_DIR=~ | tee -a $LOG
echo "=== valgrind ===" >> $LOG
make valgrind | tee -a $LOG
echo "=== coverage ===" >> $LOG
make coverage INSTALL_DIR=~ | tee -a $LOG
ARG IMAGE_TAG=centos7.6.1810
FROM centos:${IMAGE_TAG}
MAINTAINER FIWARE Orion Context Broker Team. Telefónica I+D
ARG GIT_NAME
ARG GIT_REV_ORION
ARG CLEAN_DEV_TOOLS
ENV ORION_USER orion
ENV GIT_NAME ${GIT_NAME:-telefonicaid}
ENV GIT_REV_ORION ${GIT_REV_ORION:-master}
ENV CLEAN_DEV_TOOLS ${CLEAN_DEV_TOOLS:-1}
WORKDIR /opt
RUN \
adduser --comment "${ORION_USER}" ${ORION_USER} && \
# Install dependencies
yum -y install epel-release && \
yum -y install \
boost-devel \
bzip2 \
cmake \
gnutls-devel \
libgcrypt-devel \
libcurl-devel \
openssl-devel \
libuuid-devel \
make \
nc \
git \
gcc-c++ \
scons \
tar \
which \
cyrus-sasl-devel && \
# Install libmicrohttpd from source
cd /opt && \
curl -kOL http://ftp.gnu.org/gnu/libmicrohttpd/libmicrohttpd-0.9.48.tar.gz && \
tar xvf libmicrohttpd-0.9.48.tar.gz && \
cd libmicrohttpd-0.9.48 && \
./configure --disable-messages --disable-postprocessor --disable-dauth && \
make && \
make install && \
ldconfig && \
# Install mongodb driver from source
cd /opt && \
curl -kOL https://github.com/mongodb/mongo-cxx-driver/archive/legacy-1.1.2.tar.gz && \
tar xfz legacy-1.1.2.tar.gz && \
cd mongo-cxx-driver-legacy-1.1.2 && \
scons --use-sasl-client --ssl && \
scons install --prefix=/usr/local --use-sasl-client --ssl && \
# Install rapidjson from source
cd /opt && \
curl -kOL https://github.com/miloyip/rapidjson/archive/v1.0.2.tar.gz && \
tar xfz v1.0.2.tar.gz && \
mv rapidjson-1.0.2/include/rapidjson/ /usr/local/include && \
# Install orion from source
cd /opt && \
git clone https://github.com/${GIT_NAME}/fiware-orion && \
cd fiware-orion && \
git checkout ${GIT_REV_ORION} && \
make && \
make install && \
# reduce size of installed binaries
strip /usr/bin/contextBroker && \
# create needed log and run paths
mkdir -p /var/log/contextBroker && \
mkdir -p /var/run/contextBroker && \
chown ${ORION_USER} /var/log/contextBroker && \
chown ${ORION_USER} /var/run/contextBroker && \
cd /opt && \
if [ ${CLEAN_DEV_TOOLS} -eq 0 ] ; then yum clean all && exit 0 ; fi && \
# cleanup sources, dev tools, locales and yum cache to reduce the final image size
rm -rf /opt/libmicrohttpd-0.9.48.tar.gz \
/usr/local/include/microhttpd.h \
/usr/local/lib/libmicrohttpd.* \
/opt/libmicrohttpd-0.9.48 \
/opt/legacy-1.1.2.tar.gz \
/opt/mongo-cxx-driver-legacy-1.1.2 \
/usr/local/include/mongo \
/usr/local/lib/libmongoclient.a \
/opt/rapidjson-1.0.2 \
/opt/v1.0.2.tar.gz \
/usr/local/include/rapidjson \
/opt/fiware-orion \
# We don't need to manage Linux account password requisites: length, uppercase/lowercase, etc.
# This cannot be removed using yum as yum uses hard dependencies and doing so will
# uninstall essential packages
/usr/share/cracklib \
# We don't need glibc locale data. This cannot be removed using yum as yum uses hard
# dependencies and doing so will uninstall essential packages
/usr/share/i18n /usr/{lib,lib64}/gconv \
&& \
yum -y erase git perl* rsync \
cmake libarchive \
gcc-c++ cloog-ppl cpp gcc glibc-devel glibc-headers \
kernel-headers libgomp libstdc++-devel mpfr ppl \
scons boost-devel libcurl-devel gnutls-devel libgcrypt-devel \
clang llvm llvm-libs \
CUnit-devel CUnit \
autoconf automake m4 libidn-devel \
boost-wave boost-serialization boost-python \
boost-iostreams boost boost-date-time \
boost-test boost-graph boost-signals \
boost-program-options boost-math \
openssh openssh-clients libedit hwdata dbus-glib fipscheck* *devel sysvinit-tools \
&& \
# Erase without dependencies of the document formatting system (man). This cannot be removed using yum
# as yum uses hard dependencies and doing so will uninstall essential packages
rpm -qa groff | xargs -r rpm -e --nodeps && \
# Clean yum data
yum clean all && rm -rf /var/lib/yum/yumdb && rm -rf /var/lib/yum/history && \
# Rebuild rpm data files
rpm -vv --rebuilddb && \
# Delete unused locales. Only preserve en_US and the locale aliases
find /usr/share/locale -mindepth 1 -maxdepth 1 ! -name 'en_US' ! -name 'locale.alias' | xargs -r rm -r && \
bash -c 'localedef --list-archive | grep -v -e "en_US" | xargs localedef --delete-from-archive' && \
# We use cp instead of mv as to refresh locale changes for ssh connections. We use /bin/cp instead of
# cp to avoid any alias substitution, which in some cases has been problematic
/bin/cp -f /usr/lib/locale/locale-archive /usr/lib/locale/locale-archive.tmpl && \
build-locale-archive && \
# Don't need old log files inside docker images
rm -f /var/log/*log
ADD mongodb.repo /etc/yum.repos.d/
RUN yum install -y python curl nc mongodb-org-shell valgrind bc python-pip && \
pip install --upgrade pip && \
cd /opt/fiware-orion && \
mkdir ~/bin && \
make install_scripts INSTALL_DIR=~ && \
. scripts/testEnv.sh && \
cd /opt && \
pip install virtualenv && \
virtualenv /opt/ft_env && \
. /opt/ft_env/bin/activate && \
pip install Flask==1.0.2 pyOpenSSL==19.0.0 && \
deactivate
ENV PATH=~/bin:$PATH
# Install gmock and lcov
RUN cd /opt && \
curl -O https://src.fedoraproject.org/repo/pkgs/gmock/gmock-1.5.0.tar.bz2/d738cfee341ad10ce0d7a0cc4209dd5e/gmock-1.5.0.tar.bz2 && \
tar xfvj gmock-1.5.0.tar.bz2 && \
cd gmock-1.5.0 && \
./configure && \
make && \
make install && \
ldconfig && \
cd .. && \
rm -fr gmock-1.5.0 && \
cd /opt && \
curl -kOL https://github.com/linux-test-project/lcov/releases/download/v1.12/lcov-1.12.tar.gz && \
tar xfz lcov-1.12.tar.gz && \
cd lcov-1.12/ && \
make install && \
cd .. && \
rm -fr lcov-1.12
RUN yum -y install mongodb-org && \
cd /opt && \
git clone https://github.com/docker-library/mongo.git docker-mongo && \
cd docker-mongo/ && \
git checkout fbaaf63e240b194cc3a05b859611c26b02035abf && \
cp -a /opt/docker-mongo/3.6/docker-entrypoint.sh /usr/local/bin && \
curl -o /usr/local/bin/gosu -sL "https://github.com/tianon/gosu/releases/download/1.11/gosu-amd64" && \
chmod +x /usr/local/bin/gosu && \
gosu nobody true && \
useradd -M -s /bin/false mongodb && \
mkdir -p /data/db /data/configdb && \
chown -R mongodb:mongodb /data/db /data/configdb && \
ln -s /usr/local/bin/docker-entrypoint.sh /entrypoint.sh && \
mv /etc/mongod.conf /etc/mongod.conf.orig && \
mkdir /docker-entrypoint-initdb.d /log
VOLUME /data/db /data/configdb
WORKDIR /
#ENTRYPOINT ["/usr/bin/contextBroker","-fg", "-multiservice", "-ngsiv1Autocast" ]
#EXPOSE 1026
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["mongod"]
STOPSIGNAL SIGINT
COPY test.sh /
[mongodb-org-3.6]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.6/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc
Edit: Add --build-arg CLEAN_DEV_TOOLS=0
You need to enable IPv6 support in the Docker daemon. (https://docs.docker.com/config/daemon/ipv6/) In the case of CentOS, you create a /etc/docker/daemon.json file as shown: { "ipv6": true, "fixed-cidr-v6": "2001:db8:1::/64" }
And restart the Docker daemon.
sudo systemctl restart docker
Thanks for the information! I'll try to apply this solution to the docker CI so maybe we can remove the hack we have there to disable IPv6 test.
I have taken a closer look at the CI stuff and I think I was wrong...
The ipv4_ipv6_both.test is not disabled in the Travis CI run. For instance (https://travis-ci.org/telefonicaid/fiware-orion/jobs/655794394?utm_medium=notification&utm_source=github_status):
It seems the aforementioned disabling hack in the build.sh script (https://github.com/telefonicaid/fiware-orion/blob/master/ci/rpm7/build.sh#L99) is not applied in Travis CI. To apply it, the -j parameter is used:
-j --jenkins execute fix for jenkins during functional testing (disable ipv6 test)
In .travis.yml, this -j is not being used. I guess (although I'm not sure) this is done to run the tests in some Jenkins instance used by the FIWARE Foundation (@caa06d9c could tell :)
Moreover, in .travis.yml I see:
- if [ "$TEST" != "compliance" ]; then echo '{"ipv6":true,"fixed-cidr-v6":"2001:db8:1::/64"}' | sudo tee /etc/docker/daemon.json; fi
- if [ "$TEST" != "compliance" ]; then sudo service docker restart; fi
which corresponds precisely with the solution @fisuda has described.
Sorry for the noise... :)
@fgalan -j is in use in nightly (release) builds (RPM), at least it was a while ago. This Jenkins (that supports builds) runs tasks in a cluster, so the ipv6 config should be turned on there. @flopezag can help with it, I guess :)
Hi, I'm trying to run the functional tests in a docker container. Dockerfile:
entrypoint.sh:
docker-compose.yml:
Every test fails with
FAIL 10 - SHELL-INIT produced output on stderr
Is it not possible to run the tests in a container? Edit: content of /tmp/contextBroker.log