allada / eth-archive-snapshot

Free public Ethereum Archive Snapshot
Apache License 2.0

An error occurred (AccessDenied) when calling the GetObjectAttributes operation: Access Denied #2

Closed · seonggwonyoon closed 1 year ago

seonggwonyoon commented 1 year ago

Hello, I cloned this repository following its guide and ran sudo ./build_archive_node.sh, but the error below appears and the script does not proceed.

System Environment

Logs

ubuntu@ip-172-31-0-239:~/eth-archive-snapshot$ sudo ./build_archive_node.sh 
+ [[ 0 -ne 0 ]]
+ mutex_function install_prereq
+ local function_name=install_prereq
+ set -euxo pipefail
+ flock -x 10
+ install_prereq
+ set -euxo pipefail
+ apt update
Hit:1 http://us-west-2.ec2.ports.ubuntu.com/ubuntu-ports jammy InRelease
Hit:2 http://us-west-2.ec2.ports.ubuntu.com/ubuntu-ports jammy-updates InRelease
Hit:3 http://us-west-2.ec2.ports.ubuntu.com/ubuntu-ports jammy-backports InRelease
Hit:4 http://ports.ubuntu.com/ubuntu-ports jammy-security InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
37 packages can be upgraded. Run 'apt list --upgradable' to see them.
+ DEBIAN_FRONTEND=noninteractive
+ apt install -y zfsutils-linux unzip pv clang-12 make jq python3-boto3 super cmake
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
jq is already the newest version (1.6-2.1ubuntu3).
make is already the newest version (4.3-4.1build1).
pv is already the newest version (1.6.6-1build2).
clang-12 is already the newest version (1:12.0.1-19ubuntu3).
python3-boto3 is already the newest version (1.20.34+dfsg-1).
super is already the newest version (3.30.3-2).
cmake is already the newest version (3.22.1-1ubuntu1.22.04.1).
unzip is already the newest version (6.0-26ubuntu3.1).
zfsutils-linux is already the newest version (2.1.4-0ubuntu0.1).
0 upgraded, 0 newly installed, 0 to remove and 37 not upgraded.
++ which clang-12
+ ln -s /usr/bin/clang-12 /usr/bin/cc
ln: failed to create symbolic link '/usr/bin/cc': File exists
+ true
+ snap install --classic go
snap "go" is already installed, see 'snap help refresh'
+ cargo --version
./build_archive_node.sh: line 61: cargo: command not found
+ curl --proto =https --tlsv1.2 -sSf https://sh.rustup.rs
+ bash /dev/stdin -y
info: downloading installer
info: profile set to 'default'
info: default host triple is aarch64-unknown-linux-gnu
warning: Updating existing toolchain, profile choice will be ignored
info: syncing channel updates for '1.64.0-aarch64-unknown-linux-gnu'
info: default toolchain set to '1.64.0-aarch64-unknown-linux-gnu'

  1.64.0-aarch64-unknown-linux-gnu unchanged - rustc 1.64.0 (a55dd71d5 2022-09-19)

Rust is installed now. Great!

To get started you may need to restart your current shell.
This would reload your PATH environment variable to include
Cargo's bin directory ($HOME/.cargo/bin).

To configure your current shell, run:
source "$HOME/.cargo/env"
+ . /root/.cargo/env
++ case ":${PATH}:" in
++ export PATH=/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
++ PATH=/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
+ mutex_function setup_drives
+ local function_name=setup_drives
+ set -euxo pipefail
+ flock -x 10
+ setup_drives
+ set -euxo pipefail
+ zfs list tank
NAME   USED  AVAIL     REFER  MOUNTPOINT
tank  1.70M  6.69T       96K  none
+ return
+ . /root/.cargo/env
++ case ":${PATH}:" in
++ export PATH=/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
++ PATH=/root/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
+ mutex_function install_zstd
+ local function_name=install_zstd
+ set -euxo pipefail
+ mutex_function install_aws_cli
+ safe_wait
+ BACKGROUND_PIDS=($(jobs -p))
+ local function_name=install_aws_cli
+ set -euxo pipefail
+ mutex_function install_s3pcp
+ local function_name=install_s3pcp
+ mutex_function install_putils
+ set -euxo pipefail
+ local function_name=install_putils
+ mutex_function install_erigon
+ set -euxo pipefail
+ local function_name=install_erigon
+ set -euxo pipefail
+ flock -x 10
++ jobs -p
+ flock -x 10
+ flock -x 10
+ flock -x 10
+ for PID in "${BACKGROUND_PIDS[@]}"
+ wait -f 22606
+ flock -x 10
+ install_zstd
+ set -euxo pipefail
+ pzstd --help
+ install_aws_cli
+ set -euxo pipefail
+ install_s3pcp
+ aws --version
+ set -euxo pipefail
+ s3pcp --help
+ install_putils
+ set -euxo pipefail
+ psplit --help
+ install_erigon
+ set -euxo pipefail
+ erigon --help
psplit 0.1.0
Takes a file or stdin and splits it by -b bytes and sends them to the
OUTPUT_COMMAND in parallel limited by -p concurrent jobs.

Example:

$ printf '012345678901234567890123' | psplit -b 10 sh -c 'cat > /tmp/psplit_$SEQ; echo >>
/tmp/psplit_$SEQ'

Will result in 3 files:
/tmp/psplit_0 = "0123456789
"
/tmp/psplit_1 = "0123456789
"
/tmp/psplit_2 = "0123
"

USAGE:
    psplit [OPTIONS] <OUTPUT_COMMAND>

ARGS:
    <OUTPUT_COMMAND>    Command to run on each output. An environmental variable "SEQ" will be
                        set containing the sequence number of the slice. Data will be sent to
                        stdin of command

OPTIONS:
    -b, --bytes <BYTES>
            Number of bytes per output file [default: 1073741824]

    -h, --help
            Print help information

    -i, --input-file <INPUT_FILE>
            Input file to split. If not set, uses stdin

    -p, --parallel-count <PARALLEL_COUNT>
            Number of commands allowed to run in parallel [default: 16]

    -V, --version
            Print version information
+ pjoin --help
Usage:
  pzstd [args] [FILE(s)]
Parallel ZSTD options:
  -p, --processes   #    : number of threads to use for (de)compression (default:<numcpus>)
ZSTD options:
  -#                     : # compression level (1-19, default:<numcpus>)
  -d, --decompress       : decompression
  -o                file : result stored into `file` (only if 1 input file)
  -f, --force            : overwrite output without prompting, (de)compress links
      --rm               : remove source file(s) after successful (de)compression
  -k, --keep             : preserve source file(s) (default)
  -h, --help             : display help and exit
  -V, --version          : display version number and exit
  -v, --verbose          : verbose mode; specify multiple times to increase log level (default:2)
  -q, --quiet            : suppress warnings; specify twice to suppress errors too
  -c, --stdout           : force write to standard output, even if it is the console
  -r                     : operate recursively on directories
      --ultra            : enable levels beyond 19, up to 22 (requires more memory)
  -C, --check            : integrity check (default)
      --no-check         : no integrity check
  -t, --test             : test compressed file integrity
  --                     : all arguments after "--" are treated as files
+ return
+ for PID in "${BACKGROUND_PIDS[@]}"
+ wait -f 22607
pjoin 0.1.0
Takes stdin commands split by new lines and executes them in parallel and
prints stdout in order based on the order they are in stdin.

Example:

$ pjoin <<'EOT'
sh -c 'printf foo; sleep 1'
printf bar
EOT

Will result in:
'foobar' (without quotes)

USAGE:
    pjoin [OPTIONS] [OUTPUT_FILE]

ARGS:
    <OUTPUT_FILE>    Path to write file. Prints to stdout if not set. Using a file can be faster
                     than stdout

OPTIONS:
    -b, --buffer-size <BUFFER_SIZE>
            Size in bytes of the stdout buffer for reach command [default: 1073741824]

    -h, --help
            Print help information

    -p, --parallel-count <PARALLEL_COUNT>
            Number of commands to run in parallel [default: 16]

    -V, --version
            Print version information
+ return
Downloads data from s3 and sends it to stdout very fast and on a pinned version.
Many concurrent connections to s3 are opened and different chunks of the file
are downloaded in parallel and stiched together using the `pjoin` utility.

USAGE:
    s3pcp [OPTIONS] [S3_PATH]

ARGS:
    S3_PATH    A path to an s3 object. Format: 's3://{bucket}/{key}'

OPTIONS:
    --requester-pays
        If the account downloading is requesting to be the payer for
        the request.

    --region <REGION>
        The region the request should be sent to.

    -p, --parallel-count <PARALLEL_COUNT>
        Number of commands to run in parallel [default: based on computer resources]

    -h, --help
        Print help information
+ return
aws-cli/2.9.13 Python/3.9.11 Linux/5.15.0-1026-aws exe/aarch64.ubuntu.22 prompt/off
+ return
+ for PID in "${BACKGROUND_PIDS[@]}"
+ wait -f 22608
+ for PID in "${BACKGROUND_PIDS[@]}"
+ wait -f 22609
+ for PID in "${BACKGROUND_PIDS[@]}"
+ wait -f 22610
erigon [global options] command [command options] [arguments...]

VERSION:
   2022.09.3-stable-32bd69e5

COMMANDS:
   init                               Bootstrap and initialize a new genesis block
   import                             Import a blockchain file
   snapshots                          
   help                               Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --datadir value                           Data directory for the databases (default: "/root/.local/share/erigon") [<invalid Value>]
   --ethash.dagdir value                     Directory to store the ethash mining DAGs (default: "/root/.local/share/erigon-ethash") [<invalid Value>]
   --snapshots                               Default: use snapshots "true" for BSC, Mainnet and Goerli. use snapshots "false" in all other cases
   --txpool.disable                          experimental external pool and block producer, see ./cmd/txpool/readme.md for more info. Disabling internal txpool and block producer.
   --txpool.locals value                     Comma separated accounts to treat as locals (no flush, priority inclusion)
   --txpool.nolocals                         Disables price exemptions for locally submitted transactions
   --txpool.pricelimit value                 Minimum gas price (fee cap) limit to enforce for acceptance into the pool (default: 1)
   --txpool.pricebump value                  Price bump percentage to replace an already existing transaction (default: 10)
   --txpool.accountslots value               Minimum number of executable transaction slots guaranteed per account (default: 16)
   --txpool.globalslots value                Maximum number of executable transaction slots for all accounts (default: 10000)
   --txpool.globalbasefeeslots value         Maximum number of non-executable transactions where only not enough baseFee (default: 30000)
   --txpool.accountqueue value               Maximum number of non-executable transaction slots permitted per account (default: 64)
   --txpool.globalqueue value                Maximum number of non-executable transaction slots for all accounts (default: 30000)
   --txpool.lifetime value                   Maximum amount of time non-executable transaction are queued (default: 3h0m0s)
   --txpool.trace.senders value              Comma separared list of addresses, whoes transactions will traced in transaction pool with debug printing
   --prune value                             Choose which ancient data delete from DB:
                                             h - prune history (ChangeSets, HistoryIndices - used by historical state access, like eth_getStorageAt, eth_getBalanceAt, debug_traceTransaction, trace_block, trace_transaction, etc.)
                                             r - prune receipts (Receipts, Logs, LogTopicIndex, LogAddressIndex - used by eth_getLogs and similar RPC methods)
                                             t - prune transaction by it's hash index
                                             c - prune call traces (used by trace_filter method)
                                             Does delete data older than 90K blocks, --prune=h is shortcut for: --prune.h.older=90000.
                                             Similarly, --prune=t is shortcut for: --prune.t.older=90000 and --prune=c is shortcut for: --prune.c.older=90000.
                                             However, --prune=r means to prune receipts before the Beacon Chain genesis (Consensus Layer might need receipts after that).
                                             If an item is NOT on the list - means NO pruning for this data.
                                             Example: --prune=htc (default: "disabled")
   --prune.h.older value                     Prune data older than this number of blocks from the tip of the chain (if --prune flag has 'h', then default is 90K) (default: 0)
   --prune.r.older value                     Prune data older than this number of blocks from the tip of the chain (default: 0)
   --prune.t.older value                     Prune data older than this number of blocks from the tip of the chain (if --prune flag has 't', then default is 90K) (default: 0)
   --prune.c.older value                     Prune data older than this number of blocks from the tip of the chain (if --prune flag has 'c', then default is 90K) (default: 0)
   --prune.h.before value                    Prune data before this block (default: 0)
   --prune.r.before value                    Prune data before this block (default: 0)
   --prune.t.before value                    Prune data before this block (default: 0)
   --prune.c.before value                    Prune data before this block (default: 0)
   --batchSize value                         Batch size for the execution stage (default: "256M")
   --blockDownloaderWindow value             Outstanding limit of block bodies being downloaded (default: 32768)
   --database.verbosity value                Enabling internal db logs. Very high verbosity levels may require recompile db. Default: 2, means warning. (default: 2)
   --private.api.addr value                  private api network address, for example: 127.0.0.1:9090, empty string means not to start the listener. do not expose to public network. serves remote database interface (default: "127.0.0.1:9090")
   --private.api.ratelimit value             Amount of requests server handle simultaneously - requests over this limit will wait. Increase it - if clients see 'request timeout' while server load is low - it means your 'hot data' is small or have much RAM.  (default: 31872)
   --etl.bufferSize value                    Buffer size for ETL operations. (default: "256MB")
   --tls                                     Enable TLS handshake
   --tls.cert value                          Specify certificate
   --tls.key value                           Specify key file
   --tls.cacert value                        Specify certificate authority
   --state.stream.disable                    Disable streaming of state changes from core to RPC daemon
   --sync.loop.throttle value                Sets the minimum time between sync loop starts (e.g. 1h30m, default is none)
   --bad.block value                         Marks block with given hex string as bad and forces initial reorg before normal staged sync
   --http                                    HTTP-RPC server (enabled by default). Use --http=false to disable it
   --http.addr value                         HTTP-RPC server listening interface (default: "localhost")
   --http.port value                         HTTP-RPC server listening port (default: 8545)
   --authrpc.addr value                      HTTP-RPC server listening interface for the Engine API (default: "localhost")
   --authrpc.port value                      HTTP-RPC server listening port for the Engine API (default: 8551)
   --authrpc.jwtsecret value                 Path to the token that ensures safe connection between CL and EL
   --http.compression                        Enable compression over HTTP-RPC
   --http.corsdomain value                   Comma separated list of domains from which to accept cross origin requests (browser enforced)
   --http.vhosts value                       Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard. (default: "localhost")
   --authrpc.vhosts value                    Comma separated list of virtual hostnames from which to accept Engine API requests (server enforced). Accepts '*' wildcard. (default: "localhost")
   --http.api value                          API's offered over the HTTP-RPC interface (default: "eth,erigon,engine")
   --ws                                      Enable the WS-RPC server
   --ws.compression                          Enable compression over WebSocket
   --http.trace                              Trace HTTP requests with INFO level
   --state.cache value                       Amount of keys to store in StateCache (enabled if no --datadir set). Set 0 to disable StateCache. 1_000_000 keys ~ equal to 2Gb RAM (maybe we will add RAM accounting in future versions). (default: 1000000)
   --rpc.batch.concurrency value             Does limit amount of goroutines to process 1 batch request. Means 1 bach request can't overload server. 1 batch still can have unlimited amount of request (default: 2)
   --rpc.streaming.disable                   Erigon has enalbed json streaming for some heavy endpoints (like trace_*). It's treadoff: greatly reduce amount of RAM (in some cases from 30GB to 30mb), but it produce invalid json format if error happened in the middle of streaming (because json is not streaming-friendly format)
   --db.read.concurrency value               Does limit amount of parallel db reads. Default: equal to GOMAXPROCS (or number of CPU) (default: 32)
   --rpc.accessList value                    Specify granular (method-by-method) API allowlist
   --trace.compat                            Bug for bug compatibility with OE for trace_ routines
   --rpc.gascap value                        Sets a cap on gas that can be used in eth_call/estimateGas (default: 50000000)
   --experimental.overlay                    Enables In-Memory Overlay for PoS
   --txpool.api.addr value                   txpool api network address, for example: 127.0.0.1:9090 (default: use value of --private.api.addr)
   --trace.maxtraces value                   Sets a limit on traces that can be returned in trace_filter (default: 200)
   --http.timeouts.read value                Maximum duration for reading the entire request, including the body. (default: 30s)
   --http.timeouts.write value               Maximum duration before timing out writes of the response. It is reset whenever a new request's header is read. (default: 30m0s)
   --http.timeouts.idle value                Maximum amount of time to wait for the next request when keep-alives are enabled. If http.timeouts.idle is zero, the value of http.timeouts.read is used. (default: 2m0s)
   --authrpc.timeouts.read value             Maximum duration for reading the entire request, including the body. (default: 30s)
   --authrpc.timeouts.write value            Maximum duration before timing out writes of the response. It is reset whenever a new request's header is read. (default: 30m0s)
   --authrpc.timeouts.idle value             Maximum amount of time to wait for the next request when keep-alives are enabled. If authrpc.timeouts.idle is zero, the value of authrpc.timeouts.read is used. (default: 2m0s)
   --rpc.evmtimeout value                    Maximum amount of time to wait for the answer from EVM call. (default: 5m0s)
   --snap.keepblocks                         Keep ancient blocks in db (useful for debug)
   --snap.stop                               Workaround to stop producing new snapshots, if you meet some snapshots-related critical bug
   --db.pagesize value                       set mdbx pagesize on db creation: must be power of 2 and '256b <= pagesize <= 64kb'. default: equal to OperationSystem's pageSize (default: "4KB")
   --torrent.port value                      port to listen and serve BitTorrent protocol (default: 42069)
   --torrent.maxpeers value                  unused parameter (reserved for future use) (default: 100)
   --torrent.conns.perfile value             connections per file (default: 10)
   --torrent.download.slots value            amount of files to download in parallel. If network has enough seeders 1-3 slot enough, if network has lack of seeders increase to 5-7 (too big value will slow down everything). (default: 3)
   --torrent.upload.rate value               bytes per second, example: 32mb (default: "4mb")
   --torrent.download.rate value             bytes per second, example: 32mb (default: "16mb")
   --torrent.verbosity value                 0=silent, 1=error, 2=warn, 3=info, 4=debug, 5=detail (must set --verbosity to equal or higher level and has defeault: 3) (default: 2)
   --port value                              Network listening port (default: 30303)
   --p2p.protocol value                      Version of eth p2p protocol (default: 66)
   --nat value                               NAT port mapping mechanism (any|none|upnp|pmp|stun|extip:<IP>)
                                                  "" or "none"         default - do not nat
                                                  "extip:77.12.33.4"   will assume the local machine is reachable on the given IP
                                                  "any"                uses the first auto-detected mechanism
                                                  "upnp"               uses the Universal Plug and Play protocol
                                                  "pmp"                uses NAT-PMP with an auto-detected gateway address
                                                  "pmp:192.168.0.1"    uses NAT-PMP with the given gateway address
                                                  "stun"               uses STUN to detect an external IP using a default server
                                                  "stun:<server>"      uses STUN to detect an external IP using the given server (host:port)
   --nodiscover                              Disables the peer discovery mechanism (manual peer addition)
   --v5disc                                  Enables the experimental RLPx V5 (Topic Discovery) mechanism
   --netrestrict value                       Restricts network communication to the given IP networks (CIDR masks)
   --nodekey value                           P2P node key file
   --nodekeyhex value                        P2P node key as hex (for testing)
   --discovery.dns value                     Sets DNS discovery entry points (use "" to disable DNS)
   --bootnodes value                         Comma separated enode URLs for P2P discovery bootstrap
   --staticpeers value                       Comma separated enode URLs to connect to
   --trustedpeers value                      Comma separated enode URLs which are always allowed to connect, even above the peer limit
   --maxpeers value                          Maximum number of network peers (network disabled if set to 0) (default: 100)
   --chain value                             Name of the testnet to join (default: "mainnet")
   --dev.period value                        Block period to use in developer mode (0 = mine only if transaction pending) (default: 0)
   --vmdebug                                 Record information useful for VM and contract debugging
   --networkid value                         Explicitly set network id (integer)(For testnets: use --chain <testnet_name> instead) (default: 1)
   --fakepow                                 Disables proof-of-work verification
   --gpo.blocks value                        Number of recent blocks to check for gas prices (default: 20)
   --gpo.percentile value                    Suggested gas price is the given percentile of a set of recent transaction gas prices (default: 60)
   --allow-insecure-unlock                   Allow insecure account unlocking when account-related RPCs are exposed by http
   --metrics                                 Enable metrics collection and reporting
   --metrics.expensive                       Enable expensive metrics collection and reporting
   --metrics.addr value                      Enable stand-alone metrics HTTP server listening interface (default: "127.0.0.1")
   --metrics.port value                      Metrics HTTP server listening port (default: 6060)
   --experimental.history.v2                 Not recommended, experimental: Can't change this flag after node creation. New DB and Snapshots format of history allows: parallel blocks execution, get state as of given transaction without executing whole block.
   --identity value                          Custom node name
   --clique.checkpoint value                 number of blocks after which to save the vote snapshot to the database (default: 10)
   --clique.snapshots value                  number of recent vote snapshots to keep in memory (default: 1024)
   --clique.signatures value                 number of recent block signatures to keep in memory (default: 16384)
   --clique.datadir value                    a path to clique db folder [<invalid Value>]
   --watch-the-burn                          Enable WatchTheBurn stage to keep track of ETH issuance
   --mine                                    Enable mining
   --proposer.disable                        Disables PoS proposer
   --miner.notify value                      Comma separated HTTP URL list to notify of new work packages
   --miner.gaslimit value                    Target gas limit for mined blocks (default: 30000000)
   --miner.etherbase value                   Public address for block mining rewards (default: "0")
   --miner.extradata value                   Block extra data set by the miner (default = client version)
   --miner.noverify                          Disable remote sealing verification
   --miner.sigfile value                     Private key to sign blocks with
   --sentry.api.addr value                   comma separated sentry addresses '<host>:<port>,<host>:<port>'
   --sentry.log-peer-info                    Log detailed peer info when a peer connects or disconnects. Enable to integrate with observer.
   --downloader.api.addr value               downloader address '<host>:<port>'
   --no-downloader                           to disable downloader component
   --downloader.verify                       verify snapshots on startup. it will not report founded problems but just re-download broken pieces
   --healthcheck                             Enable grpc health check
   --bor.heimdall value                      URL of Heimdall service (default: "http://localhost:1317")
   --bor.withoutheimdall                     Run without Heimdall service (for testing purpose)
   --ethstats value                          Reporting URL of a ethstats service (nodename:secret@host:port)
   --override.terminaltotaldifficulty value  Manually specify TerminalTotalDifficulty, overriding the bundled setting (default: <nil>) [<invalid Value>]
   --override.mergeNetsplitBlock value       Manually specify FORK_NEXT_VALUE (see EIP-3675), overriding the bundled setting (default: <nil>) [<invalid Value>]
   --config value                            Sets erigon flags from YAML/TOML file
   --verbosity value                         Logging verbosity: 0=silent, 1=error, 2=warn, 3=info, 4=debug, 5=detail (default: 3)
   --log.json                                Format logs with JSON
   --pprof                                   Enable the pprof HTTP server
   --pprof.addr value                        pprof HTTP server listening interface (default: "127.0.0.1")
   --pprof.port value                        pprof HTTP server listening port (default: 6060)
   --pprof.cpuprofile value                  Write CPU profile to the given file
   --trace value                             Write execution trace to the given file
   --help, -h                                show help
   --version, -v                             print the version

+ return
+ mutex_function prepare_zfs_datasets
+ local function_name=prepare_zfs_datasets
+ set -euxo pipefail
+ flock -x 10
+ prepare_zfs_datasets
+ set -euxo pipefail
+ zfs create -o mountpoint=/erigon/data tank/erigon_data
cannot create 'tank/erigon_data': dataset already exists
+ true
+ zfs create -o mountpoint=/erigon/data/eth tank/erigon_data/eth
cannot create 'tank/erigon_data/eth': dataset already exists
+ true
+ mutex_function setup_and_download_lighthouse_snapshot
+ safe_wait
+ BACKGROUND_PIDS=($(jobs -p))
+ mutex_function download_snapshots
+ local function_name=setup_and_download_lighthouse_snapshot
+ set -euxo pipefail
+ mutex_function download_nodes
+ local function_name=download_snapshots
+ set -euxo pipefail
+ mutex_function download_database_file
+ local function_name=download_nodes
+ set -euxo pipefail
+ local function_name=download_database_file
+ set -euxo pipefail
++ jobs -p
+ flock -x 10
+ flock -x 10
+ flock -x 10
+ for PID in "${BACKGROUND_PIDS[@]}"
+ wait -f 22654
+ flock -x 10
+ setup_and_download_lighthouse_snapshot
+ set -euxo pipefail
+ export LIGHTHOUSE_WITH_ERIGON=1
+ LIGHTHOUSE_WITH_ERIGON=1
+ download_snapshots
+ set -euxo pipefail
+ zfs list tank/erigon_data/eth/snapshots
+ download_nodes
+ . /dev/fd/63
+ set -euxo pipefail
+ zfs list tank/erigon_data/eth/nodes
+ download_database_file
+ set -euxo pipefail
+ zfs list tank/erigon_data/eth/chaindata
++ curl https://raw.githubusercontent.com/allada/lighthouse-beacon-snapshot/master/build_lighthouse_beacon_node.sh
NAME                             USED  AVAIL     REFER  MOUNTPOINT
tank/erigon_data/eth/snapshots    96K  6.69T       96K  /erigon/data/eth/snapshots
NAME                         USED  AVAIL     REFER  MOUNTPOINT
tank/erigon_data/eth/nodes    96K  6.69T       96K  /erigon/data/eth/nodes
NAME                             USED  AVAIL     REFER  MOUNTPOINT
tank/erigon_data/eth/chaindata    96K  6.69T       96K  /erigon/data/eth/chaindata
+ mkdir -p /erigon/data/eth/snapshots/
+ aws s3 sync --quiet --request-payer requester s3://public-blockchain-snapshots/eth/erigon-nodes-folder-latest/ /erigon/data/eth/nodes/
+ return
  % Total    % Received+ aws s3 sync --quiet --request-payer requester s3://public-blockchain-snapshots/eth/erigon-snapshots-folder-latest/ /erigon/data/eth/snapshots/
 % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  8412  100  8412    0     0  87296      0 --:--:-- --:--:-- --:--:-- 87625
++ set -euxo pipefail
++ [[ 0 -ne 0 ]]
++ mutex_function install_prereq
++ local function_name=install_prereq
++ set -euxo pipefail
++ flock -x 11
++ install_prereq
++ set -euxo pipefail
++ apt update
Hit:1 http://us-west-2.ec2.ports.ubuntu.com/ubuntu-ports jammy InRelease
Hit:2 http://us-west-2.ec2.ports.ubuntu.com/ubuntu-ports jammy-updates InRelease
Hit:3 http://us-west-2.ec2.ports.ubuntu.com/ubuntu-ports jammy-backports InRelease
Hit:4 http://ports.ubuntu.com/ubuntu-ports jammy-security InRelease
+ trueg package lists... 7%
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
37 packages can be upgraded. Run 'apt list --upgradable' to see them.
++ DEBIAN_FRONTEND=noninteractive
++ apt install -y zfsutils-linux unzip pv jq make clang-12 cmake super protobuf-compiler
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
jq is already the newest version (1.6-2.1ubuntu3).
make is already the newest version (4.3-4.1build1).
pv is already the newest version (1.6.6-1build2).
clang-12 is already the newest version (1:12.0.1-19ubuntu3).
protobuf-compiler is already the newest version (3.12.4-1ubuntu7).
super is already the newest version (3.30.3-2).
cmake is already the newest version (3.22.1-1ubuntu1.22.04.1).
unzip is already the newest version (6.0-26ubuntu3.1).
zfsutils-linux is already the newest version (2.1.4-0ubuntu0.1).
0 upgraded, 0 newly installed, 0 to remove and 37 not upgraded.
+++ which clang-12
++ ln -s /usr/bin/clang-12 /usr/bin/cc
ln: failed to create symbolic link '/usr/bin/cc': File exists
++ true
++ cargo --version
++ mutex_function setup_drives
++ local function_name=setup_drives
++ set -euxo pipefail
++ flock -x 11
++ setup_drives
++ set -euxo pipefail
++ zfs list tank
NAME   USED  AVAIL     REFER  MOUNTPOINT
tank  1.70M  6.69T       96K  none
++ return
++ source /root/.cargo/env
+++ case ":${PATH}:" in
++ safe_wait
++ mutex_function install_zstd
++ BACKGROUND_PIDS=($(jobs -p))
++ mutex_function install_aws_cli
++ local function_name=install_zstd
++ set -euxo pipefail
++ local function_name=install_aws_cli
++ mutex_function install_s3pcp
++ set -euxo pipefail
++ local function_name=install_s3pcp
++ set -euxo pipefail
+++ jobs -p
++ flock -x 11
++ flock -x 11
++ for PID in "${BACKGROUND_PIDS[@]}"
++ wait -f 22999
++ flock -x 11
++ install_zstd
++ set -euxo pipefail
++ install_aws_cli
++ pzstd --help
++ set -euxo pipefail
++ aws --version
++ install_s3pcp
++ set -euxo pipefail
++ s3pcp --help
Usage:
  pzstd [args] [FILE(s)]
Parallel ZSTD options:
  -p, --processes   #    : number of threads to use for (de)compression (default:<numcpus>)
ZSTD options:
  -#                     : # compression level (1-19, default:<numcpus>)
  -d, --decompress       : decompression
  -o                file : result stored into `file` (only if 1 input file)
  -f, --force            : overwrite output without prompting, (de)compress links
      --rm               : remove source file(s) after successful (de)compression
  -k, --keep             : preserve source file(s) (default)
  -h, --help             : display help and exit
  -V, --version          : display version number and exit
  -v, --verbose          : verbose mode; specify multiple times to increase log level (default:2)
  -q, --quiet            : suppress warnings; specify twice to suppress errors too
  -c, --stdout           : force write to standard output, even if it is the console
  -r                     : operate recursively on directories
      --ultra            : enable levels beyond 19, up to 22 (requires more memory)
  -C, --check            : integrity check (default)
      --no-check         : no integrity check
  -t, --test             : test compressed file integrity
  --                     : all arguments after "--" are treated as files
++ return
++ for PID in "${BACKGROUND_PIDS[@]}"
++ wait -f 23000
Downloads data from s3 and sends it to stdout very fast and on a pinned version.
Many concurrent connections to s3 are opened and different chunks of the file
are downloaded in parallel and stiched together using the `pjoin` utility.

USAGE:
    s3pcp [OPTIONS] [S3_PATH]

ARGS:
    S3_PATH    A path to an s3 object. Format: 's3://{bucket}/{key}'

OPTIONS:
    --requester-pays
        If the account downloading is requesting to be the payer for
        the request.

    --region <REGION>
        The region the request should be sent to.

    -p, --parallel-count <PARALLEL_COUNT>
        Number of commands to run in parallel [default: based on computer resources]

    -h, --help
        Print help information
++ return
aws-cli/2.9.13 Python/3.9.11 Linux/5.15.0-1026-aws exe/aarch64.ubuntu.22 prompt/off
++ return
++ for PID in "${BACKGROUND_PIDS[@]}"
++ wait -f 23001
++ safe_wait
++ BACKGROUND_PIDS=($(jobs -p))
++ mutex_function download_snapshot
++ local function_name=download_snapshot
++ set -euxo pipefail
++ mutex_function install_lighthouse
++ local function_name=install_lighthouse
++ set -euxo pipefail
+++ jobs -p
++ flock -x 11
++ for PID in "${BACKGROUND_PIDS[@]}"
++ wait -f 23016
++ flock -x 11
++ download_snapshot
++ set -euxo pipefail
++ install_lighthouse
++ zfs create -o mountpoint=none tank/lighthouse
++ set -euxo pipefail
++ lighthouse --help
/dev/fd/63: line 144: lighthouse: command not found
++ mkdir -p /lighthouse
++ cd /lighthouse
++ git clone https://github.com/sigp/lighthouse.git
fatal: destination path 'lighthouse' already exists and is not an empty directory.
cannot create 'tank/lighthouse': dataset already exists
++ true
++ zfs create -o mountpoint=none tank/lighthouse/data
cannot create 'tank/lighthouse/data': dataset already exists
++ true
++ zfs create -o mountpoint=/lighthouse/data/mainnet tank/lighthouse/data/mainnet
cannot create 'tank/lighthouse/data/mainnet': dataset already exists
++ true
++ mkdir -p /lighthouse/data/mainnet/beacon/
++ cd /lighthouse/data/mainnet/beacon/
++ s3pcp --requester-pays s3://public-blockchain-snapshots/lighthouse/mainnet/beacon/snapshot.tar.zstd -
++ pv
++ pzstd -d
++ tar xf -

An error occurred (AccessDenied) when calling the GetObjectAttributes operation: Access Denied
0.00 B 0:00:00 [0.00 B/s] [<=>                                                                                                            ]
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors
seonggwonyoon commented 1 year ago

Solved by setting AWS credentials in the /root/.aws folder.
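
For reference, the snapshot bucket is requester-pays, so the aws and s3pcp calls need valid signed credentials even though the data is public. A minimal sketch of that setup (placeholder key values; the files live under /root because the script runs via sudo):

# /root/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXEXAMPLE
aws_secret_access_key = <your-secret-access-key>

# /root/.aws/config
[default]
region = us-west-2

Running sudo aws configure writes the same two files interactively; an attached EC2 instance role with S3 read permissions should also work without creating any files.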

allada commented 1 year ago

Yeah, I haven't gotten around to updating this repo's script. The BSC script tells users if they don't have credentials.
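
For anyone working around this in the meantime, a guard like the BSC script's could fail fast with a readable message instead of the opaque AccessDenied above. A minimal sketch in bash (the function is hypothetical, not part of the current build_archive_node.sh):

#!/bin/bash
# Hypothetical pre-flight check: verify that usable AWS credentials exist
# before starting the requester-pays downloads.
check_aws_credentials() {
  if ! aws sts get-caller-identity >/dev/null 2>&1; then
    echo "ERROR: no valid AWS credentials found." >&2
    echo "The snapshot bucket is requester-pays; run 'aws configure'" >&2
    echo "(as root, since this script runs under sudo) and retry." >&2
    exit 1
  fi
}
check_aws_credentials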