Closed imthe-1 closed 1 year ago
Thank you for submitting your first issue to this repository! A maintainer will be here shortly to triage and review. In the meantime, please double-check that you have provided all the necessary information to make this process easy! Any information that can help save additional round trips is useful! We currently aim to give initial feedback within two business days. If this does not happen, feel free to leave a comment. Please keep an eye on how this issue will be labeled, as labels give an overview of priorities, assignments and additional actions requested by the maintainers:
Finally, remember to use https://discuss.ipfs.io if you just need general support.
2022-09-02 triage conversation: can you confirm if this happens with the default key type as well? Please also create a repo with a runnable reproducible case to help with the debugging. Thanks.
Oops, seems like we needed more information for this issue, please comment with more details or this issue will be closed in 7 days.
This issue was closed because it is missing author input.
@BigLep it resolves when using the default key. I have pushed the runnable scenario to the repo - https://github.com/imthe-1/keychain-ipns-sample/tree/master. The console can be checked for the different steps. Please refer to keychain-ipns-sample/src/App.js for the issue-specific code sample. Let me know if you face any issues.
@BigLep @lidel can you reopen the issue.
@BigLep @lidel please let me know if you need any further info. Also, can you please reopen the issue?
@BigLep in my setup I've used these flags: --enable-pubsub-experiment --enable-namesys-pubsub, and have made the following changes in the ipfs config. Sharing this so that there are no issues running the sample (you probably don't need it, but sharing it anyway :slightly_smiling_face:). Please let me know if any further info is required.
"Addresses": {
  "Swarm": [
    "/ip4/0.0.0.0/tcp/4001/ws",
    "/ip6/::/tcp/4001/ws",
    "/ip4/0.0.0.0/tcp/4002",
    "/ip6/::/tcp/4002",
    "/ip4/0.0.0.0/udp/4003/quic",
    "/ip6/::/udp/4003/quic"
  ]
},
"API": {
  "HTTPHeaders": {
    "Access-Control-Allow-Origin": [
      "http://localhost:3000"
    ]
  }
},
2022-10-07 triage conversation: @imthe-1: do you see these same IPNS issues when you use the default encryption in IPNS? We want to see how much of the issue here is related to your forking and using non-default keys.
@BigLep we have not modified the default encryption; it is the same in our implementation as in the standard ipfs-core. Strangely, in our implementation, when we repeat the execution of the deterministic IPNS keypair creation and linking (publishing) flow, the IPNS public key hash resolution works and returns the CID, but it fails the first time.
I hope I am able to answer your query; if not, please let me know.
Oops, seems like we needed more information for this issue, please comment with more details or this issue will be closed in 7 days.
I removed the stale label since the ball is in the maintainer court currently.
@BigLep please let me know if any further information is required that might help with the troubleshooting and execution.
hi @BigLep can you please share some updates on this?
Hi @imthe-1 : no update on the maintainer side. Everyone is tied up currently, traveling or preparing for IPFS Camp next week. Realistically it will be a couple of weeks before we have normal triage again and this can be brought up.
Will you by chance be at IPFS Camp?
Thanks for the update @BigLep. We tried but couldn't make it this time. We'll be looking forward to attending future IPFS events.
Please let me know if any further info is needed from my side once the issue gets picked up.
@BigLep I hope you guys had an awesome event :slightly_smiling_face:. Please let me know if any further inputs are required.
@BigLep @lidel @tinytb @mikeal hi guys. We tried posting our issue on the IPFS discussion forum but haven't had much luck there so far. Is it possible for you guys to pick it up now?
I'm trying to look into your demo repo but it crashes for me on startup. I've cloned the repo, run npm i and npm start, and I get:
Starting the development server...
/Users/alex/Documents/Workspaces/imthe-1/keychain-ipns-sample/node_modules/react-scripts/scripts/start.js:19
throw err;
^
Error: error:0308010C:digital envelope routines::unsupported
at new Hash (node:internal/crypto/hash:71:19)
at Object.createHash (node:crypto:133:10)
at module.exports (/Users/alex/Documents/Workspaces/imthe-1/keychain-ipns-sample/node_modules/webpack/lib/util/createHash.js:135:53)
at NormalModule._initBuildHash (/Users/alex/Documents/Workspaces/imthe-1/keychain-ipns-sample/node_modules/webpack/lib/NormalModule.js:417:16)
at /Users/alex/Documents/Workspaces/imthe-1/keychain-ipns-sample/node_modules/webpack/lib/NormalModule.js:452:10
at /Users/alex/Documents/Workspaces/imthe-1/keychain-ipns-sample/node_modules/webpack/lib/NormalModule.js:323:13
at /Users/alex/Documents/Workspaces/imthe-1/keychain-ipns-sample/node_modules/loader-runner/lib/LoaderRunner.js:367:11
at /Users/alex/Documents/Workspaces/imthe-1/keychain-ipns-sample/node_modules/loader-runner/lib/LoaderRunner.js:233:18
at context.callback (/Users/alex/Documents/Workspaces/imthe-1/keychain-ipns-sample/node_modules/loader-runner/lib/LoaderRunner.js:111:13)
at /Users/alex/Documents/Workspaces/imthe-1/keychain-ipns-sample/node_modules/babel-loader/lib/index.js:59:103 {
opensslErrorStack: [ 'error:03000086:digital envelope routines::initialization error' ],
library: 'digital envelope routines',
reason: 'unsupported',
code: 'ERR_OSSL_EVP_UNSUPPORTED'
}
Node.js v18.12.1
@achingbrain strangely, we are seeing this for the first time; it seems like an issue with Node.js support for the given module. Are you by any chance using Node.js v17.x? If yes, you can try this:
export NODE_OPTIONS=--openssl-legacy-provider
and then run npm start. You can switch to an LTS version too.
it runs at our side:
console logs:
in the 2nd part of the code sample in App.js (line no: 34) the resolution fails, as can be seen in the 2nd-last line of the logs
you'll have to make one change though: in the file 'node_modules/@mdip/client/lib/client/utils/ipfs.js', please change wss to ws at line no: 26, depending on the requirement.
I think this is functioning as expected, though it's a little counter-intuitive.
The first call fails but it causes the kubo node to start subscribing to the topic for the IPNS name. The linkToIPNS function in @mdip/client updates the IPNS name - the kubo node receives the update as part of this, which is why it can resolve the record on the second call.
This is similar to the ipns-over-pubsub interop tests.
The version of ipfs-core you have forked from is quite old. Recent versions publish IPNS names on the DHT the same as kubo so don't require ipns-over-pubsub for IPNS to work which should increase the chances of being able to resolve the name on the first try.
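Until such an upgrade lands, the first-call failure described above can be smoothed over client-side with a simple retry loop. A minimal sketch, under assumptions: `resolveWithRetry` and the stub resolver are illustrative names, not part of ipfs-core; in a real app the callback would wrap an actual resolve call.

```javascript
// Retry an async operation with a fixed delay between attempts.
// Illustrative helper; in a real app `fn` could wrap something like
// () => ipfs.name.resolve(ipnsName) (an assumption, not shown by the source).
async function resolveWithRetry (fn, attempts = 3, delayMs = 1000) {
  let lastErr
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn()
    } catch (err) {
      lastErr = err
      await new Promise(resolve => setTimeout(resolve, delayMs))
    }
  }
  throw lastErr
}

// Demo with a stub that fails on the first call, mimicking the
// pubsub-only publish behaviour described in this thread:
let calls = 0
const stubResolve = async () => {
  calls++
  if (calls === 1) throw new Error('could not resolve name')
  return '/ipfs/QmExampleCid' // placeholder CID, not a real one
}

resolveWithRetry(stubResolve, 3, 100).then(value => console.log(value))
```

This does not fix the underlying propagation issue, but it makes the "first resolve always fails, second succeeds" behaviour invisible to application code.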
@achingbrain this means the first resolve call will always fail, making the 2nd subsequent resolve call always succeed? We can try this out and let you know the results. Please note, in the code sample in App.js there are two parts, each with their own publish and resolve, AND publish always happens before resolve in both cases. Also, both IPNS names are different (please check the logs).
Also, we will add our changes to the latest version and try it out. Can we raise a PR to ipfs-core if it works fine? What we are trying to achieve is to derive IPNS keypairs by passing in an optional secp256k1 private key (only for the secp256k1 class). This will allow for the creation of deterministic IPNS keypairs and help devs like ourselves gain a little more flexibility in terms of using IPFS/IPNS as the core of our app.
Oops, seems like we needed more information for this issue, please comment with more details or this issue will be closed in 7 days.
@achingbrain @BigLep guys, we tried using the latest ipfs-core lib and published to IPNS, but the IPNS name did not resolve on the ipfs.io gateway. This was one of the IPNS responses:
{ "name": "12D3KooWHg8es4oaFD1pmkSaKc5g8oeQc76LhNWeEuPRTx9HR9iS", "value": "/ipfs/QmQC3ADe1Yk61VhpYR2BpXFce695h5qXdK31gLDyiQ2QFT" }
we tried using w3name for republishing and it worked after that.
can you please confirm if IPNS is working as expected for you?
@achingbrain coming back to the original issue: as you mentioned, the 2nd resolve succeeds because the kubo node has already subscribed to the topic. My problem that still exists is, when I update the IPNS name with a new CID (containing updated data), the kubo node still returns the older object when the 1st resolve after publish is called, which can cause problems in the business logic of our application.
Is lifetime the minimum time after which an IPNS record can be updated and resolved safely to the latest value?
How can we implement a mechanism that ensures IPNS names are always resolved to the latest value? If this requires a re-publishing service running every N minutes, we can do that.
Is it possible to update an IPNS record every 30 seconds and also be able to resolve to the latest value? If not every 30 seconds, is it possible every 1 minute? This is important because we cannot know how frequently the user will update their data.
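The re-publishing service idea above can be sketched with a plain timer. Everything here is illustrative: `publishFn` stands in for an actual publish call (e.g. something like `ipfs.name.publish(...)`), and the one-minute floor is an assumption about resolver-side caching, not a documented constant.

```javascript
// Minimal periodic republisher sketch. `publishFn` is a stand-in for a real
// publish call; names and the caching floor below are assumptions.
const MIN_INTERVAL_MS = 60 * 1000 // assumed resolver-side caching floor (~1 minute)

// Never republish more often than the caching floor allows.
function clampInterval (requestedMs, minMs = MIN_INTERVAL_MS) {
  return Math.max(requestedMs, minMs)
}

function startRepublisher (publishFn, requestedMs) {
  const intervalMs = clampInterval(requestedMs)
  const timer = setInterval(() => {
    publishFn().catch(err => console.error('republish failed:', err))
  }, intervalMs)
  return () => clearInterval(timer) // returns a stop function
}
```

With this shape, asking for a 30-second interval gets clamped to one minute; user-triggered updates can simply change the CID that `publishFn` reads, and the next tick publishes it.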
Please help us out!
@lidel : I'm sorry to pull you in here, but given the surface area, your expertise may be needed. I'm mostly curious if there are existing known issues we should be pointing to for this one (and then adding weight to fix those).
@imthe-1 it sounds like we are discussing multiple issues here, which makes it difficult to follow, but hopefully the below is useful.
lmk if the below understanding is correct:
@lidel thanks a lot for looking into it!
lmk if below understanding is correct:
(A) seems that your original issue (kubo failing to resolve on the first try, because you only published on PubSub) got resolved when you updated to the latest version of ipfs-core that supports publishing IPNS records on both PubSub and DHT.
- mind confirming this is no longer a problem?
we got to know that since the kubo node has not subscribed to the topic on the first resolve, it fails, and then the 2nd resolve call succeeds, but we are still struggling a bit to get IPNS working.
to share some background, we are publishing to IPNS from the browser and have used this example as reference (browser-ipns-publish). we have modified the ipfs-core, libp2p and libp2p-crypto libraries to accept an optPrivateKey when creating secp256k1 IPNS keypairs so that deterministic IPNS keypairs can be created, which is important for our use-case. then we ran into the problem of 1st resolve calls always failing, and sometimes when they do resolve (after publishing a new value) they resolve to the older value. to summarise point (A): we are okay with the 1st resolve failing and will handle it, but not being able to resolve to the latest value is critical for us (apologies for not being able to highlight this properly).
we plan to update the record on IPNS every few minutes at the start of our flow, and then the updates happen based on events, which is difficult to predict and could be within a gap of a few seconds as well.
(B) you published something on IPNS but it does not resolve on ipfs.io gateway
- this is difficult to reason about, too many variables – public gateways have additional layers in front of Kubo. needs smaller repro
as asked by @achingbrain, we went back and tested with the latest version of ipfs-core and could not resolve the IPNS name on the ipfs.io gateway. we can skip this one since we eventually got it working.
- are you experiencing the same issue when running your own Gateway in LAN and/or remotely somewhere in WAN?
we have not tried this @lidel.
(C) Kubo returns old version of your IPNS records, even after you published an update.
- is this CLI / RPC, or Gateway behavior? (or both?)
the setup is the same as the ipns-browser-publish example and we tried resolving it using ipfs-http-client connected to our kubo node, so RPC behaviour.
- afaik Kubo (go-namesys to be specific) has a hard-coded cache for /ipns/* that won't look for newer records if it has a cached one that is younger than 1 minute. This minimal caching window overrides any ttl/lifetime you may set up in your record, so in Kubo, for now, you can't get updates for /ipns/ more often than once per minute.
one minute is good enough for us and we can implement work-arounds for this, but even when I published after a few minutes, the resolve call returned the older value only. I tried multiple times as well, but it kept returning older values. it is critical for us to have the latest value.
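One thing that may be worth ruling out on the RPC path is resolver-side caching: `ipfs name resolve` and ipfs-http-client's `name.resolve` accept a `nocache` option that asks the node to skip cached records. A hedged sketch; the `resolveLatest` helper and the stub client are illustrative, not part of any library:

```javascript
// Resolve an IPNS name while asking the node to bypass its resolver cache.
// `ipfs` is expected to look like an ipfs-http-client instance, where
// name.resolve(path, opts) is an async iterable yielding path strings.
async function resolveLatest (ipfs, ipnsName) {
  let last
  for await (const path of ipfs.name.resolve(`/ipns/${ipnsName}`, { nocache: true })) {
    last = path
  }
  return last // the final, fully-resolved path
}

// Stub showing the shape of the call; a real client would come from
// something like: import { create } from 'ipfs-http-client'
const stubIpfs = {
  name: {
    resolve: async function * (path, opts) {
      yield '/ipfs/QmExampleCid' // placeholder, not a real CID
    }
  }
}

resolveLatest(stubIpfs, '16Uiu2HAmExample').then(p => console.log(p))
```

If `nocache: true` still returns the stale value, the old record is likely coming from the network (e.g. other peers serving an older record) rather than from the local cache.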
- there is wip work to make this more flexible, and use TTL from IPNS record (if present) – related fixes tracked in fix: honour --ttl flag in 'ipfs name publish' kubo#9471 and max-age, ETag for /ipns requests kubo#1818 (comment)
cool!
@BigLep @lidel guys if we can close it this week it will help us tremendously:pray:
@lidel we did try to resolve using our gateway that we set up on a remote instance (coz we really want to get this implemented into our system :slightly_smiling_face:). it resolves for a period of time and then stops resolving, kind of intermittent. this is the IPNS name: 16Uiu2HAmEmQ5HoGsx5X7tqeWyow9fZ6LaUV9kvQGAEX6nfRQSaHd
Also, the resolving-to-older-values scenario is still there.
Oops, seems like we needed more information for this issue, please comment with more details or this issue will be closed in 7 days.
@BigLep @lidel please let me know if any other information is required. @BigLep can you please remove the stale label. thanks!
Oops, seems like we needed more information for this issue, please comment with more details or this issue will be closed in 7 days.
This issue was closed because it is missing author input.
Reopening - we need to find a way to prevent these kinds of items from getting closed. I'll put it down as in need of maintainer input.
js-ipfs is being deprecated in favor of Helia. You can follow https://github.com/ipfs/js-ipfs/issues/4336 and read the migration guide.
Please feel free to reopen with any comments by 2023-06-02. We will do a final pass on reopened issues afterward (see https://github.com/ipfs/js-ipfs/issues/4336).
Assigning to @achingbrain to answer whether this issue is already resolved in Helia, or if this issue needs to be migrated to that repo!
@helia/ipns supports using secp256k1 keys to publish IPNS records, and also publishing them to DHT peers so this should all work.
@imthe-1 if you are still struggling with this issue, can you please port your code to use Helia (see @SgtPooki's comment above for migration guide links) and, if it still doesn't work, please open an issue on the @helia/ipns repo with a link to a repro case, and we can continue the conversation there.
Platform: macOS: 12.4 chrome browser: 104.0.5112.101
Subsystem: IPNS
Severity: High
Description:
we have forked the js-ipfs repo and have added the functionality to pass an optional secp256k1 private key for creating custom IPNS keypairs. This allows us to create IPNS keypairs from a child key derived from a seed phrase, giving more control to the seed phrase. The repos modified are ipfs-core, libp2p, libp2p-crypto. We are using a downgraded version since we faced issues using the latest versions at the time. We have tried using the current latest version as well but faced this issue (4148).
the setup has a browser running ipfs-core connected to a go-ipfs (kubo) node so that propagation of IPNS publishing works properly. The IPNS name.publish works properly and returns the public key hash, but the name does not resolve when resolved via the go-ipfs node, and strangely, if the same process is repeated twice, we are able to resolve the IPNS name. Due to the modifications, we can pass the same private key again on the second name.publish call.
expected result: the IPNS name should resolve on the first call itself.
libp2p-crypto/src/keys/secp256k1-class.js(modified)
creating an IPNS keypair
IPNS publish using the keypair generated above
resolving via the go-ipfs node running on an instance
modified ipfs-core module w/ libp2p and libp2p-crypto changes: https://www.npmjs.com/package/@mdip/ipfs-core