hillbun opened this issue 2 years ago
I also encountered this problem. With the same version of the chart, if I add another values file such as values-poc.yaml and set valueFiles to values-poc.yaml, the deployment fails with: values-poc.yaml: no such file or directory
Looks like I have the same issue. My application.yaml looks like this:
kind: Application
metadata:
  name: istiod
  namespace: argocd
spec:
  source:
    targetRevision: 1.13.3
    helm:
      values: |-
        global:
          hub: gcr.io/istio-release
          tag: 1.13.3
        meshConfig:
          accessLogFile: /dev/stdout
          ingressService: istio-ingress
          ingressSelector: ingress
When I change values for the Helm chart, nothing happens; even if I sync, I still have the same old ConfigMap for Istio.
I have observed this as well. From my initial cursory testing it seems that a hard-refresh fixes things. E.g.:
argocd app get test --hard-refresh
FWIW I just tested with v2.4.0-rc5+b84dd8b and I hit it there as well
Same issue here. I have Argo CD v2.0.5. When I update a setting in my values.yaml, I can see it is updated inside the app, but no update runs. That confirms Argo reads the correct values.yaml. I tried a hard refresh, but it doesn't work. After that I removed one Deployment object and did a resync; the object was created but didn't pick up the updated settings.
I feel it's related to the revision number ("rev: 1"), as if I need to increase the revision somehow.
Does anyone have a public repo I can test against?
I just tried with 2.4.2, and changing replicaCount was reflected in the UI with a simple "Refresh": https://github.com/crenshaw-dev/argocd-example-apps/commits/values-change
(Aside: @EnriqueHormilla and @hillbun, I'd upgrade ASAP if your instance is anything other than a test instance; there are 11 CVEs against 2.0.5 and 2.1.7.)
Hi @crenshaw-dev, thanks for the support. I know the change is reflected in the UI, but despite that, Argo CD doesn't do anything with it. I have configured Argo in my repo like this:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: apache-custom
  namespace: argocd
spec:
  destination:
    namespace: apache-custom
    server: https://kubernetes.default.svc
  project: default
  source:
    helm:
      valueFiles:
        - values-stg.yaml
    path: path/apache-custom
    repoURL: git@github.com:privateRepo/repo.git
    targetRevision: test
In my values-stg.yaml I have replicaCount: 6, but I have only one pod.
If I configure Argo to point directly at the Helm repo:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo
  namespace: argocd
spec:
  project: default
  source:
    chart: apache
    helm:
      parameters:
        - name: replicaCount
          value: '6'
    repoURL: 'https://charts.bitnami.com/bitnami'
    targetRevision: 9.1.11
  destination:
    server: https://kubernetes.default.svc
    namespace: apache-custom
  syncPolicy:
    automated: {}
It works, and the deployment scales up the pods.
Hi @crenshaw-dev, finally I did the workaround suggested by the community in https://github.com/argoproj/argo-cd/issues/2789#issuecomment-1167345080.
@EnriqueHormilla can you clarify what you mean by "argoCD doesn't do anything with this change"? When I change the replicaCount, not only is it reflected in the UI, but when I sync, the replicaCount is matched by the number of pods.
The plugin workaround should be unnecessary. Generally I avoid plugin hacks when the built-in config management tool can do the job. Plugins are more difficult to secure, and you may have to re-implement features that are already baked into the default implementation.
Hi @crenshaw-dev, I agree with you about avoiding custom plugins, but in the tests I did, it doesn't work. I want to use a custom external values file for my Helm chart (not the values.yaml from the Helm repo), and I don't want to write my custom values into the Argo Application definition as in the second example. I want it as in the first example. Argo CD can see values-stg.yaml and sees the changes, but those changes aren't reflected in the app. No matter what I change, Argo CD doesn't redeploy the app. In the examples above, both set the same custom value (replicaCount), and as you can see, the first doesn't "read" the value and doesn't add more pods.
Example: I change replicaCount to 6, and I can see replicaCount: 6 in the Argo CD GUI, but Argo CD doesn't create more pods.
Ah, that use case will require #2789 to be resolved. There's currently a draft PR, and it's planned for 2.5 in August.
Thanks @crenshaw-dev , good news!
@crenshaw-dev the current release is v2.4.11, so this bug is not fixed yet?
I am grateful to have found this thread as I was going a bit crazy thinking I am doing something somehow wrong. 😄
Can confirm in our own setup that only values.yaml is read; it doesn't even bother reading inline values such as:
source:
  repoURL: git@github.blah/helm.git
  path: kubernetes/helm/bootstrap
  helm:
    # the below values are merrily ignored
    values: |
      testFeature:
        enabled: true
        overlay: non-production
Edit: kubectl apply -f application.yaml works. Will use that as a workaround until the fix is released :)
@xgt001 what exactly is in application.yaml in your case?
I've installed a Helm chart from a public repository. Now I want to change values.yaml and expect Argo CD to re-deploy the pods, which is not the case.
@crenshaw-dev I'm also running into this issue.
I'm running argocd@v2.4.0+91aefab
argocd: v2.4.0+91aefab
BuildDate: 2022-06-10T17:23:37Z
GitCommit: 91aefabc5b213a258ddcfe04b8e69bb4a2dd2566
GitTreeState: clean
GoVersion: go1.18.3
Compiler: gc
Platform: linux/amd64
argocd-server: v2.4.0+91aefab
BuildDate: 2022-06-10T17:23:37Z
GitCommit: 91aefabc5b213a258ddcfe04b8e69bb4a2dd2566
GitTreeState: clean
GoVersion: go1.18.3
Compiler: gc
Platform: linux/amd64
Kustomize Version: v4.4.1 2021-11-11T23:36:27Z
Helm Version: v3.8.1+g5cb9af4
Kubectl Version: v0.23.1
Jsonnet Version: v0.18.0
and the application configuration YAML (datadog Helm chart in this case):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: datadog
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: "default"
  source:
    repoURL: https://helm.datadoghq.com
    targetRevision: 3.1.3
    chart: datadog
    helm:
      values: |
        <content-of-my-values.yaml>
  destination:
    server: https://kubernetes.default.svc
    namespace: datadog
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
    retry:
      limit: 1
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 1m
When I modified the content inside the content-of-my-values.yaml block (.spec.source.helm.values), the app was marked as "synced" but the last applied configuration did not pick up my changes.
I checked argocd app get datadog -oyaml and .status.history[0] has the correct timestamp (days after I pushed a removal-only configuration change), but .status.history[0].source.helm.values was still using the revision before the push.
I noticed repo-server logs mentioning a cache hit, but I guess that's for the chart repo, which doesn't affect this. That led me to believe the Argo CD server is responsible for the stale copy of the configuration.
the correct fix has been postponed to v2.6 (reference)
an alternative solution: https://github.com/argoproj/argo-cd/issues/2789#issuecomment-1176820186
I believe I'm still experiencing this issue with v2.6.3 using in-line Helm values similar to:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: <redacted>
  namespace: argo-cd-dev
spec:
  project: <redacted>
  source:
    repoURL: <redacted>
    targetRevision: <redacted>
    chart: <redacted>
    helm:
      values: |
        server:
          replicaCount: 0
        # Many more lines of deeply-nested YAML...
  destination:
    name: k8s-dev
    namespace: <redacted>
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - PrunePropagationPolicy=background
Editing spec.source.helm.values in either Git (the application is set to auto-sync) or the Argo UI does not cause the replica count of my deployment to change. Initiating a forceful sync via the Argo UI afterward does cause the replica count to change.
@emmercm Until this is solved, here's what I did: edit values.yaml in that repository directly, and Argo CD will correctly update deployments.
I just wanted to leave an idea here as well. At the company I work for, we were definitely stuck on this problem for a while.
The way we fixed it was to completely ditch the native Helm integration and create our own plugin (https://argo-cd.readthedocs.io/en/stable/operator-manual/config-management-plugins/). This plugin has some smarts to cover the use cases we needed, e.g. Helm recursive dependencies, multiple values files, multiple sources, mixed applications with Helm + Kustomize, etc.
I reckon if you're not afraid to write software, that is a much more complete solution than waiting for the open-source project to catch up and fix all your edge cases.
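For anyone considering the plugin route, here is a minimal sketch of what a sidecar config management plugin definition can look like. The plugin name, the `ENV` variable, and the exact helm invocation are illustrative placeholders, not what anyone in this thread actually shipped; `ARGOCD_APP_NAME` and `ARGOCD_APP_NAMESPACE` are build-environment variables Argo CD provides to plugins.

```yaml
# plugin.yaml mounted into a repo-server sidecar (illustrative sketch)
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: helm-multi-values
spec:
  generate:
    command: ["sh", "-c"]
    args:
      - |
        # ENV is a hypothetical variable selecting the extra values file
        helm dependency build . >/dev/null &&
        helm template "$ARGOCD_APP_NAME" . \
          --namespace "$ARGOCD_APP_NAMESPACE" \
          -f values.yaml -f "values-${ENV}.yaml"
```

The generate command just has to print the rendered manifests to stdout; that is what gives a plugin room to handle multiple values files or mixed tooling.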
Not sure if this is relevant to anyone, but this was the fix for my situation. I created a sub-chart that had Prometheus as a dependency, with a values file containing the Prometheus chart's values alongside my sub-chart's. The values would never update; they always kept the Prometheus defaults. It turned out I had formatted the values file wrong for the dependency chart: each dependency's values need to be nested under a key matching the dependency's name, like so:
prometheus:
  <prometheus chart values>
<another dependency chart>:
  <values>...
After that they updated as expected.
This is still a problem :(
Hitting this issue as well right now.
Same issue here for us too.
For the life of me, I cannot reproduce this. When I change the values, I immediately see the app go out of sync.
https://github.com/argoproj/argo-cd/assets/350466/8338e124-d9c3-4dec-bf28-500eebe8e6fb
Same when I use a Helm repo instead of a Helm chart hosted in git.
https://github.com/argoproj/argo-cd/assets/350466/b3639302-6193-4643-87a1-8e7790d41c94
Here's another little use case to try and help diagnose this.
I am experiencing this problem in Argo CD 2.8.2, and in my case the values update a Job which has already completed. I tried various options like forcing Replace=true in the sync-option annotations; I have even gone as far as naming the Job with something from the values.yaml file.
Set-up:
What happens:
Now I click on the Sync button.
The Sync popup overlay shows all other resources in the Synchronize Resources section at the bottom except the Job which I am trying to update/replace.
I wonder if it has something to do with the fact that the Job has completed and is now no longer seen as "updateable".
But, when I then click on Synchronize (which I presume triggers a hard sync) the new commit is actually deployed, and the replacement job does appear and run.
I hope that helps.
UPDATE
Yes, I think I may be onto something here.
I have also updated a config map from the same dependency Helm chart to have an annotation that includes the value I am updating, and that triggers a resync and correctly creates a new job.
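For context, the standard Helm trick along these lines is a checksum annotation on the rendered resource, so the manifest changes whenever the values it depends on change and Argo CD sees a diff. A sketch, where `jobConfig` is a placeholder for whatever values the Job actually consumes:

```yaml
# Job template inside the chart (illustrative sketch)
apiVersion: batch/v1
kind: Job
metadata:
  name: my-values-driven-job
  annotations:
    # Hashing the relevant values forces the rendered manifest to differ
    # whenever those values change, marking the Job out of sync.
    checksum/values: {{ .Values.jobConfig | toYaml | sha256sum }}
```

This is the same idea as the config-map annotation described above, just applied directly to the Job.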
Hi all, I made another workaround for this:
server:
  extraContainers:
    - name: argocd-hard-refresher
      image: bitnami/kubectl:1.26.9
      command:
        - /bin/sh
        - -c
      args:
        - |
          while kubectl annotate application.argoproj.io -A -l hard-refresh=true argocd.argoproj.io/refresh=hard --overwrite >/dev/null; do
            sleep 60
          done
Now any app that has the hard-refresh=true label will be hard refreshed every 60 seconds.
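Assuming the sidecar above is deployed, opting an Application in is just a label; the app name and namespace here are placeholders.

```shell
# Label the Application so the refresher sidecar's selector matches it
kubectl label application.argoproj.io my-app -n argocd hard-refresh=true
```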
Is there any update on this issue?
still having this issue
Really weird. We've been using Argo CD for almost a year with no sync issues (sometimes I had to use hard refresh). Today I came across this issue: for no apparent reason it just won't recognize the new image version I'm trying to push. The UI shows everything in sync and displays the correct commit SHA, but uses the old version.
I've also just suddenly started experiencing this issue today.
It was fine last week; it broke just as I went to demo Argo to my team. Sorry all, looks like the Demo Gods have broken Argo for everybody.
Greetings,
I'm currently facing an issue with my setup. I've installed Argo CD version v2.9.1 along with Helm chart version 5.51.4 on my cluster. Additionally, I've installed Argo CD Image Updater version v0.12.2 with Helm chart version 0.9.1 to manage the deployment of new program versions. My Argo CD Image Updater configuration employs the argocd method for staging and the git-write-back method for production.
This is my Application config file:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    argocd-image-updater.argoproj.io/image-list: 'cephs3sync={{ cephs3sync_image_url }}'
    argocd-image-updater.argoproj.io/cephs3sync.pull-secret: 'pullsecret:argocd/registry'
    argocd-image-updater.argoproj.io/cephs3sync.update-strategy: '{{ 'latest' if stagingcluster is defined and stagingcluster | default(false) else 'semver' }}'
    argocd-image-updater.argoproj.io/cephs3sync.force-update: 'true'
{% if not stagingcluster | default(false) %}
    argocd-image-updater.argoproj.io/cephs3sync.allow-tags: 'regexp:^[0-9]+(\.[0-9]+){0,2}$'
{% endif %}
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  name: cephs3sync
  namespace: argocd
spec:
  project: default
  source:
    helm:
      valueFiles:
        - values.yaml
      parameters:
        - name: image.repository
          value: '{{ cephs3sync_image_url }}'
        - name: image.tag
          value: '{{ cephs3sync_image_tag }}'
    repoURL: https://{{ git_url }}/iaas/cephs3sync.git
    path: charts
    targetRevision: '{{ 'dev' if stagingcluster is defined and stagingcluster | default(false) else 'main' }}'
  destination:
    server: https://kubernetes.default.svc
    namespace: interface
  syncPolicy:
{% if stagingcluster is defined and stagingcluster | default(false) %}
    automated:
      selfHeal: true
      prune: true
      allowEmpty: false
{% endif %}
    syncOptions:
      - CreateNamespace=true
      - Validate=true
      - PruneLast=true
      - PrunePropagationPolicy=foreground
      - Replace=true
      - ApplyOutOfSyncOnly=true
While the first method (argocd) operates smoothly, I've encountered an issue with the second (git-write-back) where the program doesn't reflect any changes in the Argo CD UI. After a thorough investigation, I reviewed the logs of all containers, and there were no apparent errors or problems.
Furthermore, I've noted that there might be additional insights in the Argo CD Image Updater logs during the notice and redeployment of the new version of my app.
time="2023-12-05T14:52:28Z" level=info msg="Successfully updated image 'reg.arr.com/interface/prod/cephs3sync:0.4.2' to 'reg.arr.com/interface/prod/cephs3sync:0.4.3', but pending spec update (dry run=false)" alias=cephs3sync application=cephs3sync image_name=interface/prod/cephs3sync image_tag=0.4.2 registry=reg.arr.com
time="2023-12-05T14:52:28Z" level=info msg="Committing 1 parameter update(s) for application cephs3sync" application=cephs3sync
time="2023-12-05T14:52:28Z" level=info msg="Initializing https://git.arr.com/iaas/cephs3sync.git to /tmp/git-cephs3sync3949202410"
time="2023-12-05T14:52:28Z" level=info msg="rm -rf /tmp/git-cephs3sync3949202410" dir= execID=443e4
time="2023-12-05T14:52:28Z" level=info msg=Trace args="[rm -rf /tmp/git-cephs3sync3949202410]" dir= operation_name="exec rm" time_ms=1.803898
time="2023-12-05T14:52:28Z" level=info msg="git fetch origin --tags --force" dir=/tmp/git-cephs3sync3949202410 execID=12bd4
time="2023-12-05T14:52:29Z" level=info msg=Trace args="[git fetch origin --tags --force]" dir=/tmp/git-cephs3sync3949202410 operation_name="exec git" time_ms=1043.440004
time="2023-12-05T14:52:29Z" level=info msg="git config user.name argocd-image-updater" dir=/tmp/git-cephs3sync3949202410 execID=2adbf
time="2023-12-05T14:52:29Z" level=info msg=Trace args="[git config user.name argocd-image-updater]" dir=/tmp/git-cephs3sync3949202410 operation_name="exec git" time_ms=3.326561
time="2023-12-05T14:52:29Z" level=info msg="git config user.email noreply@argoproj.io" dir=/tmp/git-cephs3sync3949202410 execID=37e0b
time="2023-12-05T14:52:29Z" level=info msg=Trace args="[git config user.email noreply@argoproj.io]" dir=/tmp/git-cephs3sync3949202410 operation_name="exec git" time_ms=8.85276
time="2023-12-05T14:52:29Z" level=info msg="git checkout --force argocd" dir=/tmp/git-cephs3sync3949202410 execID=3bd3a
time="2023-12-05T14:52:29Z" level=info msg=Trace args="[git checkout --force argocd]" dir=/tmp/git-cephs3sync3949202410 operation_name="exec git" time_ms=30.662612999999997
time="2023-12-05T14:52:29Z" level=info msg="git clean -fdx" dir=/tmp/git-cephs3sync3949202410 execID=b80e3
time="2023-12-05T14:52:29Z" level=info msg=Trace args="[git clean -fdx]" dir=/tmp/git-cephs3sync3949202410 operation_name="exec git" time_ms=5.5891530000000005
time="2023-12-05T14:52:29Z" level=info msg="Successfully updated the live application spec" application=cephs3sync
time="2023-12-05T14:52:29Z" level=info msg="Processing results: applications=1 images_considered=1 images_skipped=0 images_updated=1 errors=0"
As you can see through the above logs, the Argo CD Image Updater appears to be working fine with no problems.
Additionally, in the photo below, you will notice that even in the Argo CD UI, the changes are displayed, indicating that the version of the program has transitioned from 0.4.2 to 0.4.3. However, despite this UI indication, the program is not deployed to the next version.
Does anyone have any insights or help for my problem?
This is happening to me too, exactly as described here: https://github.com/argoproj/argo-cd/issues/9214#issuecomment-1847177438
The logs look good. I fixed it by adding more annotations (I use it with Helm): I added the image-name and image-tag parameter annotations. Here is the documentation: https://argocd-image-updater.readthedocs.io/en/stable/configuration/images/#specifying-helm-parameter-names
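Concretely, the annotations that page describes look roughly like this. The `cephs3sync` alias matches the Application config earlier in the thread, and the parameter names must match whatever the chart's templates actually read; treat this as a sketch, not a drop-in fix.

```yaml
metadata:
  annotations:
    # Tell Image Updater which Helm parameters receive the image name/tag;
    # these must match the parameters the chart's deployment template uses.
    argocd-image-updater.argoproj.io/cephs3sync.helm.image-name: image.repository
    argocd-image-updater.argoproj.io/cephs3sync.helm.image-tag: image.tag
```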
I haven't experienced this issue on Argo, but from the initial description it sounds like behavior related to the Helm flags --reset-values and --reuse-values, which are slightly whacky; see: https://github.com/helm/helm/issues/8085
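For anyone unfamiliar with those flags, here is a rough illustration of the difference; the release and chart names are placeholders.

```shell
# --reuse-values merges the previous release's computed values with any new
# ones, so a stale override can silently stick around across upgrades:
helm upgrade myrelease ./mychart --reuse-values --set image.tag=1.2.3

# --reset-values discards the previous release's values and starts from the
# chart defaults plus whatever is passed on this invocation:
helm upgrade myrelease ./mychart --reset-values -f values.yaml
```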
same
Interesting 2-year-long thread... I initially used the wrapped-chart approach and thought it felt a little "hacky". Now I too have the same issue.
Long thread... I thought I was doing something wrong (or going crazy).
I noticed that if I rename the referenced values file, the change is picked up by Argo CD.
My workaround was to create a pipeline (Azure DevOps) to change the values file name, but I quickly found out this is not a feasible solution: my Git history is becoming rather long, and the pipeline takes a lot of time too, so it's not good for development.
Following the thread, interested in a long-term fix.
Do you know how we can use --reset-values / --reuse-values the declarative way with Argo CD's Helm support?
I'm currently experiencing this on a fresh cluster, I too was questioning my existence for a moment until I ran into this thread.
Is there a workaround or temporary fix for this?
I found renaming the file works. As a workaround of course...
Hi. I had a similar problem, but then I realized that I had some override values in the Argo CD parameters.
A hard refresh helped me here, but this is still a big issue.
I'm also seeing this on applications that pull a Helm chart and set a valuesObject. In this instance I'm updating a value for the aws-load-balancer-controller. If I update a Helm parameter in the valuesObject section and merge that change (with autosync turned on), nothing happens. The only way to get around this is to apply the application.yaml to the cluster or do a hard refresh.
argocd v2.1.7
When I create an application connected to a Helm repo and the values file in the Helm repo changes, the application does not update, even if I click Sync or Refresh.
The application only changes when I click Hard Refresh.
How can this be solved?
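Until the underlying caching issue is fixed, the workarounds in this thread boil down to forcing a hard refresh, which bypasses the repo-server's manifest cache. The app name and namespace below are placeholders.

```shell
# Via the CLI:
argocd app get my-app --hard-refresh

# Or via the refresh annotation, as the sidecar workaround above does:
kubectl annotate application.argoproj.io my-app -n argocd \
  argocd.argoproj.io/refresh=hard --overwrite
```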