Hi @adlingepa. Could you try running `minikube delete` and then `make start`? It seems some resources are not being deleted.
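A minimal sketch of that clean-restart sequence (the `start` target is assumed to be the one from this repo's Makefile):

```sh
# Tear down the minikube cluster so no stale resources survive, then redeploy.
minikube delete
make start
```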
I'll try that. I am working on a VM server, so I cannot use the virtualbox driver for minikube. Here is another log; I tried to check the status of the reposerver:

```
$ http --print=b GET http://localhost:8200/api/v1/namespaces/default/services/tuf-reposerver/proxy/health/dependencies
{
    "apiVersion": "v1",
    "code": 503,
    "kind": "Status",
    "message": "no endpoints available for service \"tuf-reposerver\"",
    "metadata": {},
    "reason": "ServiceUnavailable",
    "status": "Failure"
}
```
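That "no endpoints available" status means the Kubernetes Service has no ready pods behind it. One way to confirm, with standard kubectl (not from the original thread):

```sh
# An empty ENDPOINTS column here matches the 503 above.
kubectl get endpoints tuf-reposerver
kubectl describe service tuf-reposerver
```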
Hm, I haven't tried this with other drivers. Please make sure you are starting from scratch before `make start`; in your logs it looks like the services hadn't been deleted properly.

You should be able to check the reposerver after changing your /etc/hosts as described in the README, with `http http://tuf-reposerver.ota.local/health` (see the sketch below).

But in any case, from that log message it does look like the reposerver did not start properly. We would need more logs from tuf-reposerver and the tuf-reposerver daemon, but I hope this will be solved if you run `make start` with a clean environment.
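A minimal sketch of that /etc/hosts change, assuming the minikube node IP and using only the hostnames that appear in this thread (the README has the authoritative list):

```sh
# Map the OTA CE hostnames to the minikube node IP so the ingress resolves.
echo "$(minikube ip) ota.ce tuf-reposerver.ota.local" | sudo tee -a /etc/hosts
http http://tuf-reposerver.ota.local/health
```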
Attaching the complete logs from a clean environment: make_start_fail.log
```
Waiting for reposerver
Waiting for reposerver
Waiting for reposerver
Waiting for reposerver
HTTP/1.1 503 Service Unavailable
Cache-Control: no-cache, private
Content-Length: 87
Content-Type: text/plain; charset=utf-8
Date: Wed, 26 Aug 2020 19:11:32 GMT
X-Content-Type-Options: nosniff

Error trying to reach service: 'dial tcp 172.18.0.13:9001: connect: connection refused'

[{WIFEXITED(s) && WEXITSTATUS(s) == 5}], 0, NULL) = 113746
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=113746, si_uid=1001, si_status=5, si_utime=2, si_stime=4} ---
rt_sigreturn({mask=[]}) = 113746
openat(AT_FDCWD, "/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = 4
fstat(4, {st_mode=S_IFREG|0644, st_size=2995, ...}) = 0
read(4, "# Locale name alias data base.\n#"..., 4096) = 2995
read(4, "", 4096) = 0
close(4) = 0
openat(AT_FDCWD, "/usr/share/locale/en_US.UTF-8/LC_MESSAGES/make.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale/en_US.utf8/LC_MESSAGES/make.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale/en_US/LC_MESSAGES/make.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale/en.UTF-8/LC_MESSAGES/make.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale/en.utf8/LC_MESSAGES/make.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale/en/LC_MESSAGES/make.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale-langpack/en_US.UTF-8/LC_MESSAGES/make.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale-langpack/en_US.utf8/LC_MESSAGES/make.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale-langpack/en_US/LC_MESSAGES/make.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale-langpack/en.UTF-8/LC_MESSAGES/make.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale-langpack/en.utf8/LC_MESSAGES/make.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/share/locale-langpack/en/LC_MESSAGES/make.mo", O_RDONLY) = 4
fstat(4, {st_mode=S_IFREG|0644, st_size=5491, ...}) = 0
mmap(NULL, 5491, PROT_READ, MAP_PRIVATE, 4, 0) = 0x7f1992183000
close(4) = 0
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 1), ...}) = 0
write(1, "Makefile:13: recipe for target '"..., 46Makefile:13: recipe for target 'start' failed
) = 46
write(2, "make: [start] Error 5\n", 26make: [start] Error 5
) = 26
rt_sigprocmask(SIG_BLOCK, [HUP INT QUIT TERM XCPU XFSZ], NULL, 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
chdir("/u02/rahul/ota-community-edition") = 0
close(1) = 0
exit_group(2) = ?
+++ exited with 2 +++
prashanta@hiuyoctovmnew:/u02/rahul/ota-community-edition$
```
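The "connection refused" on 172.18.0.13:9001 means nothing was listening on the reposerver port when the start script probed it. A hedged way to wait for the pod first (the `app=tuf-reposerver` label is an assumption about the deployment's labels):

```sh
# Block until the reposerver pod reports Ready, then probe its health endpoint.
kubectl wait --for=condition=ready pod -l app=tuf-reposerver --timeout=300s
http --print=b GET http://localhost:8200/api/v1/namespaces/default/services/tuf-reposerver/proxy/health
```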
The reposerver health check looks OK:
```
prashanta@hiuyoctovmnew:/tmp$ http http://tuf-reposerver.ota.local/health
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 15
Content-Type: application/json
Date: Wed, 26 Aug 2020 19:33:24 GMT
Server: nginx/1.13.9
x-ats-version: reposerver/0.7.1-22-g1d0d714

{
    "status": "OK"
}

prashanta@hiuyoctovmnew:/tmp$
```
One of the services/pods seems to be down, but I cannot see which one from these logs. What is the result of `http http://tuf-reposerver.ota.local/health/dependencies`?

You can check the logs on the containers themselves:

```
kubectl get pods
kubectl logs <container>
```

and see which pod did not start properly. It might also be worthwhile running with `DEBUG=true` so we can see exactly which call is failing.
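A sketch of that debug flow, assuming the scripts honor a `DEBUG` environment variable (the `+`-prefixed trace lines later in this thread are consistent with `set -x`):

```sh
# Re-run the deployment with shell tracing enabled so the failing call is visible.
DEBUG=true make start

# Then inspect any pod that is not Running/Ready (<pod-name> is a placeholder).
kubectl get pods
kubectl logs <pod-name> --tail=100
```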
The result of `http http://tuf-reposerver.ota.local/health/dependencies` is below:
```
prashanta@hiuyoctovmnew:~$ http http://tuf-reposerver.ota.local/health/dependencies
HTTP/1.1 503 Service Temporarily Unavailable
Connection: keep-alive
Content-Length: 213
Content-Type: text/html
Date: Thu, 27 Aug 2020 04:51:23 GMT
Server: nginx/1.13.9
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body bgcolor="white">
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx/1.13.9</center>
</body>
</html>
```
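That 503 page is served by nginx itself, so the ingress could not reach the backend for this path. One way to take nginx out of the equation and query the pod directly (standard kubectl; the deployment name matches the pod list below):

```sh
# Port-forward straight to the reposerver container and repeat the check.
kubectl port-forward deployment/tuf-reposerver 9001:9001 &
http GET http://localhost:9001/health/dependencies
```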
Below is the output of `kubectl get pods`; it seems all pods started successfully:
```
prashanta@hiuyoctovmnew:/u02/rahul/ota-community-edition$ kubectl get pods
NAME READY STATUS RESTARTS AGE
app-76cccf7cb4-6lxpk 1/1 Running 3 53m
campaigner-76cf7cc696-mqvdb 1/1 Running 7 53m
campaigner-daemon-6c99974f96-sw752 1/1 Running 0 53m
device-registry-68fbc6d858-vhf9j 1/1 Running 0 53m
device-registry-daemon-6c4b5989cc-2v6vr 1/1 Running 0 53m
director-57d4758bc5-rmwqx 1/1 Running 0 53m
director-daemon-5fdd469fd6-z62nw 1/1 Running 3 53m
gateway-deployment-6567f4d664-djpxv 1/1 Running 0 53m
kafka-0 1/1 Running 0 64m
mysql-0 1/1 Running 0 64m
treehub-0 1/1 Running 2 53m
tuf-keyserver-68c654d575-dqfrg 1/1 Running 1 53m
tuf-keyserver-daemon-5d6c475575-5tj5p 1/1 Running 0 53m
tuf-reposerver-684b7b8f7c-zs25z 1/1 Running 0 53m
web-events-7f8645678-djv6w 1/1 Running 1 53m
zookeeper-0 1/1 Running 0 64m
```
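Note the RESTARTS column above: several pods crash-looped before settling (campaigner restarted 7 times; app and director-daemon 3 each). Standard kubectl calls to inspect a restarted pod:

```sh
# Logs from the previous (crashed) container instance, plus the events
# explaining why it was restarted.
kubectl logs campaigner-76cf7cc696-mqvdb --previous
kubectl describe pod campaigner-76cf7cc696-mqvdb
```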
Logs of the tuf-reposerver pod:
```
prashanta@hiuyoctovmnew:/u02/rahul/ota-community-edition$ kubectl logs tuf-reposerver-684b7b8f7c-zs25z
I|2020-08-27 04:53:01,071|akka.event.slf4j.Slf4jLogger|Slf4jLogger started
W|2020-08-27 04:53:01,534|akka.util.ManifestInfo|Detected possible incompatible versions on the classpath. Please note that a given Akka version MUST be the same across all modules of Akka that you are using, e.g. if you use [2.5.30] all other modules that are released together MUST be of the same version. Make sure you're using a compatible set of libraries. Possibly conflicting versions [2.5.30, 2.5.26] in libraries [akka-protobuf:2.5.30, akka-actor:2.5.30, akka-slf4j:2.5.26, akka-stream:2.5.30]
I|2020-08-27 04:53:01,953|com.zaxxer.hikari.HikariDataSource|database - Started.
I|2020-08-27 04:53:02,328|c.a.libats.slick.db.RunMigrations$|Running migrations
I|2020-08-27 04:53:03,736|o.f.c.i.license.VersionPrinter|Flyway Community Edition 6.0.8 by Redgate
I|2020-08-27 04:53:03,757|c.a.tuf.reposerver.Boot$|Starting reposerver/0.7.1-22-g1d0d714 on http://0.0.0.0:9001
I|2020-08-27 04:53:03,768|c.a.libats.messaging.MessageBus$|Starting messaging mode: Kafka
I|2020-08-27 04:53:03,850|o.f.c.i.database.DatabaseFactory|Database: jdbc:mariadb://mysql:3306/tuf_reposerver (MariaDB 10.3)
I|2020-08-27 04:53:03,946|o.a.k.c.producer.ProducerConfig|ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [kafka:9092]
buffer.memory = 33554432
client.dns.lookup = default
client.id =
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
I|2020-08-27 04:53:04,051|o.f.core.internal.command.DbValidate|Successfully validated 12 migrations (execution time 00:00.099s)
I|2020-08-27 04:53:04,133|o.f.c.i.s.JdbcTableSchemaHistory|Creating Schema History table `tuf_reposerver`.`schema_version` ...
I|2020-08-27 04:53:04,149|o.a.kafka.common.utils.AppInfoParser|Kafka version: 2.4.1
I|2020-08-27 04:53:04,149|o.a.kafka.common.utils.AppInfoParser|Kafka commitId: c57222ae8cd7866b
I|2020-08-27 04:53:04,149|o.a.kafka.common.utils.AppInfoParser|Kafka startTimeMs: 1598503984147
I|2020-08-27 04:53:04,172|c.a.t.r.t.LocalTargetStoreEngine$|Created local fs blob store directory: /tmp/tuf-targets
I|2020-08-27 04:53:04,172|c.a.t.r.t.LocalTargetStoreEngine$|local fs blob store set to /tmp/tuf-targets
I|2020-08-27 04:53:04,435|o.f.core.internal.command.DbMigrate|Current version of schema `tuf_reposerver`: << Empty Schema >>
I|2020-08-27 04:53:04,441|o.f.core.internal.command.DbMigrate|Migrating schema `tuf_reposerver` to version 1 - initial schema
I|2020-08-27 04:53:04,762|c.a.libats.http.tracing.Tracing$|Request tracing disabled in config
I|2020-08-27 04:53:04,955|o.f.core.internal.command.DbMigrate|Migrating schema `tuf_reposerver` to version 2 - add namespace
I|2020-08-27 04:53:05,052|org.apache.kafka.clients.Metadata|[Producer clientId=producer-1] Cluster ID: 0M8GGEHTTC-inmJcOaD-Jg
I|2020-08-27 04:53:05,132|o.f.core.internal.command.DbMigrate|Migrating schema `tuf_reposerver` to version 3 - add custom target fields
I|2020-08-27 04:53:05,259|o.f.core.internal.command.DbMigrate|Migrating schema `tuf_reposerver` to version 4 - use longtext
I|2020-08-27 04:53:05,758|o.f.core.internal.command.DbMigrate|Migrating schema `tuf_reposerver` to version 5 - add storage method
I|2020-08-27 04:53:05,959|o.f.core.internal.command.DbMigrate|Migrating schema `tuf_reposerver` to version 6 - add signed role expires
I|2020-08-27 04:53:06,135|o.f.core.internal.command.DbMigrate|Migrating schema `tuf_reposerver` to version 7 - remove rootjson cache
I|2020-08-27 04:53:06,749|o.f.core.internal.command.DbMigrate|Migrating schema `tuf_reposerver` to version 8 - allow null target uri
I|2020-08-27 04:53:07,106|o.f.core.internal.command.DbMigrate|Migrating schema `tuf_reposerver` to version 9 - filename comments
I|2020-08-27 04:53:07,245|o.f.core.internal.command.DbMigrate|Migrating schema `tuf_reposerver` to version 10 - remove constraints from filename comments
I|2020-08-27 04:53:07,298|o.f.core.internal.command.DbMigrate|Migrating schema `tuf_reposerver` to version 11 - add delegations
I|2020-08-27 04:53:07,498|o.f.core.internal.command.DbMigrate|Migrating schema `tuf_reposerver` to version 12 - add CliManaged target item
I|2020-08-27 04:53:07,704|o.f.core.internal.command.DbMigrate|Successfully applied 12 migrations to schema `tuf_reposerver` (execution time 00:03.272s)
I|2020-08-27 04:53:07,718|c.a.libats.slick.db.RunMigrations$|Ran 12 migrations
I|2020-08-27 04:53:07,718|c.a.tuf.reposerver.Boot$|Finished running migrations
I|2020-08-27 04:53:07,745|c.a.libats.auth.NamespaceDirectives$|Using namespace from default conf extractor
I|2020-08-27 04:53:08,234|c.a.l.h.logging.RequestLoggingActor|http_stime=376 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 04:53:08,680|c.a.l.h.logging.RequestLoggingActor|http_stime=1207 http_query='' http_method=GET http_path=/health/dependencies http_service_name=reposerver http_status=200
I|2020-08-27 04:53:29,128|akka.actor.ActorSystemImpl|Request timeout encountered for request [POST /api/v1/user_repo Empty]
E|2020-08-27 04:53:35,154|akka.actor.ActorSystemImpl|An error occurred. ErrorId: Some(25b3b4d4-6acd-4252-bd27-679b9148d286) {"code":"remote_service_error","description":"KeyserverHttpClient|Unexpected response from remote server at http://tuf-keyserver/api/v1/root/96cf7cac-5537-4737-95fa-680052df3c8b|POST|503|The server was not able to produce a timely response to your request.\r\nPlease try again in a short while!","cause":null,"errorId":"25b3b4d4-6acd-4252-bd27-679b9148d286"}
I|2020-08-27 04:53:35,156|c.a.l.h.logging.RequestLoggingActor|http_stime=26071 http_query='' http_method=POST http_path=/api/v1/user_repo http_service_name=reposerver http_status=502 req_namespace=default
I|2020-08-27 04:53:38,030|c.a.l.h.logging.RequestLoggingActor|http_stime=95 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 04:54:08,332|c.a.l.h.logging.RequestLoggingActor|http_stime=94 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 04:54:37,847|c.a.l.h.logging.RequestLoggingActor|http_stime=4 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 04:55:08,033|c.a.l.h.logging.RequestLoggingActor|http_stime=192 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 04:55:37,945|c.a.l.h.logging.RequestLoggingActor|http_stime=104 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 04:55:38,049|c.a.l.h.logging.RequestLoggingActor|http_stime=236 http_query='' http_method=GET http_path=/health/dependencies http_service_name=reposerver http_status=200
I|2020-08-27 04:55:39,422|c.a.l.h.logging.RequestLoggingActor|http_stime=491 http_query='' http_method=POST http_path=/api/v1/user_repo http_service_name=reposerver http_status=200 req_namespace=default
I|2020-08-27 04:56:08,341|c.a.l.h.logging.RequestLoggingActor|http_stime=307 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 04:56:13,969|c.a.l.h.logging.RequestLoggingActor|http_stime=15323 http_query='' http_method=GET http_path=/api/v1/user_repo/root.json http_service_name=reposerver http_status=200 req_namespace=default
I|2020-08-27 04:56:16,050|c.a.l.h.logging.RequestLoggingActor|http_stime=1106 http_query='' http_method=GET http_path=/api/v1/user_repo/root.json http_service_name=reposerver http_status=200 req_namespace=default
I|2020-08-27 04:56:37,847|c.a.l.h.logging.RequestLoggingActor|http_stime=4 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 04:57:07,846|c.a.l.h.logging.RequestLoggingActor|http_stime=4 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 04:57:37,843|c.a.l.h.logging.RequestLoggingActor|http_stime=3 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 04:58:07,845|c.a.l.h.logging.RequestLoggingActor|http_stime=4 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 04:58:37,847|c.a.l.h.logging.RequestLoggingActor|http_stime=8 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 04:59:08,941|c.a.l.h.logging.RequestLoggingActor|http_stime=4 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 04:59:38,628|c.a.l.h.logging.RequestLoggingActor|http_stime=311 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 04:59:50,321|c.a.l.h.logging.RequestLoggingActor|http_stime=33 http_query='' http_method=GET http_path=/health/dependencies http_service_name=reposerver http_status=200
E|2020-08-27 04:59:51,148|akka.actor.ActorSystemImpl|An error occurred. ErrorId: Some(c47f90ec-36bb-4ab1-beaa-585f291a9474) {"code":"conflicting_entity","description":"Entity already exists: Tuple2","cause":null,"errorId":"c47f90ec-36bb-4ab1-beaa-585f291a9474"}
I|2020-08-27 04:59:51,150|c.a.l.h.logging.RequestLoggingActor|http_stime=95 http_query='' http_method=POST http_path=/api/v1/user_repo http_service_name=reposerver http_status=409 req_namespace=default
I|2020-08-27 05:00:02,132|c.a.l.h.logging.RequestLoggingActor|http_stime=299 http_query='' http_method=GET http_path=/health/dependencies http_service_name=reposerver http_status=200
E|2020-08-27 05:00:02,752|akka.actor.ActorSystemImpl|An error occurred. ErrorId: Some(2f90c37d-0a3e-4be5-a947-76f79ab92fa4) {"code":"conflicting_entity","description":"Entity already exists: Tuple2","cause":null,"errorId":"2f90c37d-0a3e-4be5-a947-76f79ab92fa4"}
I|2020-08-27 05:00:02,830|c.a.l.h.logging.RequestLoggingActor|http_stime=192 http_query='' http_method=POST http_path=/api/v1/user_repo http_service_name=reposerver http_status=409 req_namespace=default
I|2020-08-27 05:00:08,040|c.a.l.h.logging.RequestLoggingActor|http_stime=4 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 05:00:37,936|c.a.l.h.logging.RequestLoggingActor|http_stime=7 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 05:01:07,845|c.a.l.h.logging.RequestLoggingActor|http_stime=3 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
I|2020-08-27 05:01:37,842|c.a.l.h.logging.RequestLoggingActor|http_stime=2 http_query='' http_method=GET http_path=/health http_service_name=reposerver http_status=200
prashanta@hiuyoctovmnew:/u02/rahul/ota-community-edition$
```
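The telling lines are at 04:53:29 to 04:53:35: the first `POST /api/v1/user_repo` returned 502 because tuf-keyserver answered 503, a later retry succeeded and created the repo, and subsequent runs then hit 409. A hedged way to check the keyserver side (the hostname is an assumption, following the same pattern as tuf-reposerver's):

```sh
# Probe the keyserver the same way the reposerver was probed above.
http GET http://tuf-keyserver.ota.local/health
kubectl logs deployment/tuf-keyserver --tail=50
```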
After enabling DEBUG=true, I found that the zip package is not installed:
```
+ http --ignore-stdin --check-status -d -o /u02/rahul/ota-community-edition/scripts/../generated/ota.ce/root.json GET http://localhost:8200/api/v1/namespaces/default/services/tuf-reposerver/proxy/api/v1/user_repo/root.json x-ats-namespace:default
HTTP/1.1 200 OK
Cache-Control: no-cache, private
Content-Length: 3281
Content-Type: application/json
Date: Thu, 27 Aug 2020 04:56:16 GMT
Server: akka-http/10.1.11
X-Ats-Role-Checksum: 04e2a0c03686fcde1f495c791836319f7cc346fbf8a2b3258bf3c2f4601616ca
X-Ats-Tuf-Repo-Id: 2459d350-eb48-4f17-b100-09fb08645b45
X-Ats-Version: reposerver/0.7.1-22-g1d0d714
Downloading 3.20 kB to "/u02/rahul/ota-community-edition/scripts/../generated/ota.ce/root.json"
Done. 3.20 kB in 0.00063s (4.98 MB/s)
+ echo http://tuf-reposerver.ota.local
+ echo https://ota.ce:30443
+ cat
+ zip --quiet --junk-paths /u02/rahul/ota-community-edition/scripts/../generated/ota.ce/credentials.zip /u02/rahul/ota-community-edition/scripts/../generated/ota.ce/autoprov.url /u02/rahul/ota-community-edition/scripts/../generated/ota.ce/server_ca.pem /u02/rahul/ota-community-edition/scripts/../generated/ota.ce/tufrepo.url /u02/rahul/ota-community-edition/scripts/../generated/ota.ce/targets.pub /u02/rahul/ota-community-edition/scripts/../generated/ota.ce/targets.sec /u02/rahul/ota-community-edition/scripts/../generated/ota.ce/treehub.json /u02/rahul/ota-community-edition/scripts/../generated/ota.ce/root.json
scripts/start.sh: line 263: zip: command not found
+ kill_pid 1021705
+ local pid=1021705
+ kill -0 1021705
+ kill -9 1021705
Makefile:13: recipe for target 'start' failed
make: *** [start] Error 127
prashanta@hiuyoctovmnew:/u02/rahul/ota-community-edition$
```
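Exit code 127 is the shell's "command not found". A small preflight check would catch this earlier; a sketch, with the dependency list inferred from the commands traced in this thread:

```sh
# Fail fast if any CLI tool used by the start script is missing.
for cmd in zip http jq kubectl minikube; do
  command -v "$cmd" >/dev/null 2>&1 || { echo "missing dependency: $cmd" >&2; exit 127; }
done
```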
I installed zip using `apt install zip` and tried again, but I still hit the same HTTP 409 error:
```
+ retry_command reposerver '[[ true = $(http --print=b GET http://localhost:8200/api/v1/namespaces/default/services/tuf-reposerver/proxy/health/dependencies | jq --exit-status '\''.status == "OK"'\'') ]]'
+ local name=reposerver
+ local 'command=[[ true = $(http --print=b GET http://localhost:8200/api/v1/namespaces/default/services/tuf-reposerver/proxy/health/dependencies | jq --exit-status '\''.status == "OK"'\'') ]]'
+ local n=0
+ local max=100
+ true
+ eval '[[ true = $(http --print=b GET http://localhost:8200/api/v1/namespaces/default/services/tuf-reposerver/proxy/health/dependencies | jq --exit-status '\''.status == "OK"'\'') ]]'
+ return 0
++ http --ignore-stdin --check-status --print=b POST http://localhost:8200/api/v1/namespaces/default/services/tuf-reposerver/proxy/api/v1/user_repo x-ats-namespace:default
++ jq --raw-output .
http: warning: HTTP 409 Conflict
+ id='{
"code": "conflicting_entity",
"description": "Entity already exists: Tuple2",
"cause": null,
"errorId": "2f90c37d-0a3e-4be5-a947-76f79ab92fa4"
}'
+ kill_pid 1032562
+ local pid=1032562
+ kill -0 1032562
+ kill -9 1032562
Makefile:13: recipe for target 'start' failed
make: *** [start] Error 4
```
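For context, the `retry_command` calls traced above poll the dependencies endpoint until it reports OK. A hypothetical reconstruction of that helper, matching the variables visible in the trace (`name`, `command`, `n=0`, `max=100`):

```sh
# Retry an arbitrary shell condition up to $max times, echoing progress.
retry_command() {
  local name="$1" command="$2"
  local n=0 max=100
  while true; do
    eval "$command" && return 0
    n=$((n + 1))
    [ "$n" -ge "$max" ] && { echo "$name never became healthy" >&2; return 1; }
    echo "Waiting for $name"
    sleep 5
  done
}
```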
Attached the complete logs: make_start_fail_2.log
Thanks, Prashant
Thanks for the info. The second error is because the repo already exists, which means the second run was not on a clean environment. You can either clean up and try again now that zip is installed, or update to the latest master; I have made this step idempotent, so it should continue if the repo already exists.
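A hypothetical sketch of what that idempotency amounts to in the start script: treat a 409 Conflict from the repo-creation call as success instead of aborting.

```sh
# If the user repo already exists (HTTP 409), reuse it instead of failing.
create_user_repo() {
  local out
  if out=$(http --ignore-stdin --check-status --print=b POST \
      http://localhost:8200/api/v1/namespaces/default/services/tuf-reposerver/proxy/api/v1/user_repo \
      x-ats-namespace:default 2>&1); then
    echo "$out"  # freshly created repo id
  elif echo "$out" | grep -q conflicting_entity; then
    echo "repo already exists, continuing" >&2
  else
    return 1
  fi
}
```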
Thanks for the update @simao. It worked with the latest commit #4f0c895. Thank you for your support.
Thanks for the issue report!
Hello Support Team, greetings.

I am unable to start the OTA CE server. The `make start` script is failing with HTTP error 409. I am using minikube with the docker driver; ota-community-edition is at the latest commit #187e82e.

Below are the logs:

```
...
service/tuf-reposerver unchanged
configmap/web-events-config unchanged
deployment.apps/web-events unchanged
ingress.extensions/web-events configured
secret/web-events-secret unchanged
service/web-events unchanged
Starting to serve on 127.0.0.1:8200
http: warning: HTTP 409 Conflict
Makefile:13: recipe for target 'start' failed
make: *** [start] Error 4
```