### Reproduction steps for the issue:
Create a las2peer main bootstrap deployment (with las2peer version 1.1.1) and an ethnet deployment for las2peer with the following example config files. Finally, bootstrap a las2peer service that tries to announce its deployment, as in the third config file below.
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: las2peer-bootstrap
  namespace: YOUR_NAMESPACE
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: las2peer-bootstrap
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: las2peer-bootstrap
    spec:
      volumes:
        - name: node-info-volume
          configMap:
            name: node-info
            defaultMode: 420
      containers:
        - name: las2peer
          image: registry.tech4comp.dbis.rwth-aachen.de/rwthacis/las2peer
          ports:
            - containerPort: 9000
              protocol: TCP
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8001
              protocol: TCP
          env:
            - name: LAS2PEER_ETH_HOST
              value: las2peer-ethnet:8545
            - name: NODE_ID_SEED
              value: '1'
            - name: LAS2PEER_PORT
              value: '9000'
          resources: {}
          volumeMounts:
            - name: node-info-volume
              mountPath: /app/las2peer/etc/nodeInfo.xml
              subPath: nodeInfo.xml
          livenessProbe:
            httpGet:
              path: /las2peer/webapp/welcome
              port: 32010
              host: 137.226.232.175
              scheme: HTTP
            initialDelaySeconds: 300
            timeoutSeconds: 180
            periodSeconds: 300
            successThreshold: 1
            failureThreshold: 1
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - tech4compslave2
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
```
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: las2peer-ethnet
  namespace: YOUR_NAMESPACE
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: las2peer-ethnet
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: las2peer-ethnet
    spec:
      containers:
        - name: ethereum
          image: tjanson/go-ethereum:monitored-client
          ports:
            - containerPort: 30303
              protocol: TCP
            - containerPort: 30303
              protocol: UDP
            - containerPort: 8545
              protocol: TCP
            - containerPort: 8546
              protocol: TCP
          env:
            - name: GETH_VERBOSITY
              value: '1'
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - tech4compslave2
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
```
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: las2peer-bootstrap-2
  namespace: YOUR_NAMESPACE
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: las2peer-bootstrap-2
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: las2peer-bootstrap-2
    spec:
      volumes:
        - name: node-info-volume
          configMap:
            name: node-info
            defaultMode: 420
      containers:
        - name: las2peer
          image: erdzan12/mycodegen
          ports:
            - containerPort: 9000
              protocol: TCP
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8001
              protocol: TCP
          env:
            - name: HTTP_PORT
              value: '8080'
            - name: HTTPS_PORT
              value: '8443'
            - name: LAS2PEER_PORT
              value: '9011'
            - name: SERVICE_PASSPHRASE
              value: someNewPass
            - name: GIT_USER
              value: someuser
            - name: GIT_PASSWORD
              value: shajbdhasbdsajhdsajhbdsad
            - name: TOKEN
              value: jkasndhjsbadjhsabdsahb
            - name: GIT_USER_MAIL
              value: jhsabdhjabsdbhsa@ajhsbdsahjbdashjb.com
            - name: GIT_ORGANIZATION
              value: CAETESTRWTH
            - name: TEMPLATE_REPOSITORY
              value: CAE-Templates
            - name: DEPLOYMENT_REPO
              value: CAE-Deployment-Temp
            - name: JENKINS_URL
              value: https://cae-dev.tech4comp.dbis.rwth-aachen.de/jenkins
            - name: JENKINS_JOB_TOKEN
              value: 1235f18c-561d-4c6b-8ffa-4ddmnssjbdahhjbea
            - name: BUILD_JOB_NAME
              value: Build-Job
            - name: DOCKER_JOB_NAME
              value: Docker-Job
            - name: USED_GIT_HOST
              value: GitHub
            - name: BASE_URL
              value: https://github.com
            - name: WIDGET_HOME_BASE_URL
              value: https://CAETESTRWTH.github.io/
            - name: OIDC_PROVIDER
              value: https://api.learning-layers.eu/o/oauth2
            - name: LAS2PEER_CONFIG_ENDPOINT
              value: las2peer-bootstrap:8001
            - name: LAS2PEER_BOOTSTRAP
              value: las2peer-bootstrap:9000
            - name: LAS2PEER_ETH_HOST
              value: las2peer-ethnet:8545
            - name: NODE_ID_SEED
              value: '2'
            - name: LAS2PEER_PORT
              value: '9000'
          resources: {}
          volumeMounts:
            - name: node-info-volume
              mountPath: /app/las2peer/etc/nodeInfo.xml
              subPath: nodeInfo.xml
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - tech4compslave2
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
```
After a couple of unsuccessful service announcement attempts, user registration is no longer possible because the locally tracked nonce and the actual on-chain nonce of the account used have drifted out of sync.
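For reference, the drift can be observed from outside the node. This is a minimal sketch assuming a web3j-style client; the RPC endpoint is taken from `LAS2PEER_ETH_HOST` in the configs above, and the account address is a placeholder:

```java
import java.math.BigInteger;

import org.web3j.protocol.Web3j;
import org.web3j.protocol.core.DefaultBlockParameterName;
import org.web3j.protocol.http.HttpService;

public class NonceCheck {
    public static void main(String[] args) throws Exception {
        // RPC endpoint from LAS2PEER_ETH_HOST in the deployments above.
        Web3j web3j = Web3j.build(new HttpService("http://las2peer-ethnet:8545"));

        // Placeholder address; substitute the wallet the node announces with.
        String account = "0x0000000000000000000000000000000000000000";

        // Nonce as the chain sees it, including pending transactions.
        BigInteger chainNonce = web3j
                .ethGetTransactionCount(account, DefaultBlockParameterName.PENDING)
                .send()
                .getTransactionCount();

        // If the node's locally cached nonce differs from this value, every
        // subsequent transaction is rejected ("nonce too low"/"too high"),
        // which is what blocks the user registration described above.
        System.out.println("on-chain nonce (pending): " + chainNonce);
        web3j.shutdown();
    }
}
```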
### Proposed fix:
The fix in the branch issue-70-nonce-out-of-syn therefore re-synchronises the local nonce with the on-chain one, making user registration possible again.
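A minimal sketch of the re-synchronisation idea, again assuming web3j (the branch itself may implement it differently): web3j's `FastRawTransactionManager` keeps the local nonce counter, and its `resetNonce()` re-queries `eth_getTransactionCount`, so retrying a rejected transaction after a reset brings the two back in line. The helper below is hypothetical, not the branch code:

```java
import java.io.IOException;
import java.math.BigInteger;

import org.web3j.protocol.core.methods.response.EthSendTransaction;
import org.web3j.tx.FastRawTransactionManager;

public class NonceResync {

    // Hypothetical helper: send a transaction and, if the node rejects it
    // (typically "nonce too low"), pull the on-chain nonce again and retry once.
    static String sendWithResync(FastRawTransactionManager txManager,
                                 String to, String data,
                                 BigInteger gasPrice, BigInteger gasLimit)
            throws IOException {
        EthSendTransaction response =
                txManager.sendTransaction(gasPrice, gasLimit, to, data, BigInteger.ZERO);
        if (response.hasError()) {
            // resetNonce() re-reads eth_getTransactionCount and overwrites
            // the stale local counter; this is the core of the proposed fix.
            txManager.resetNonce();
            response = txManager.sendTransaction(gasPrice, gasLimit, to, data, BigInteger.ZERO);
        }
        return response.getTransactionHash();
    }
}
```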
Sometimes the blockchain gets out of sync with the locally tracked transaction count for a wallet, and new entries fail. I think this is related to faulty contract calls that simply fail (e.g. registration of services that are not compatible with the current blockchain implementation).
As a workaround, our setup uses different wallets for the different contract types (failure-prone ones such as service registrations on one side, user-related contracts on the other), as shown in the sketch below.
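A sketch of that workaround, assuming web3j as above; the private keys are placeholders for illustration only:

```java
import org.web3j.crypto.Credentials;
import org.web3j.protocol.Web3j;
import org.web3j.protocol.http.HttpService;
import org.web3j.tx.FastRawTransactionManager;
import org.web3j.tx.TransactionManager;

public class PerContractWallets {
    public static void main(String[] args) {
        Web3j web3j = Web3j.build(new HttpService("http://las2peer-ethnet:8545"));

        // Placeholder keys; in practice these come from two separately
        // funded wallet files.
        Credentials serviceWallet = Credentials.create(
                "0x0000000000000000000000000000000000000000000000000000000000000001");
        Credentials userWallet = Credentials.create(
                "0x0000000000000000000000000000000000000000000000000000000000000002");

        // One transaction manager, and thus one nonce counter, per contract
        // type: a failing service registration can only desynchronise the
        // first counter, so user-related transactions keep flowing.
        TransactionManager serviceTxs = new FastRawTransactionManager(web3j, serviceWallet);
        TransactionManager userTxs = new FastRawTransactionManager(web3j, userWallet);
    }
}
```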
Related console log: