dyne / reflow-os

Base scripts to run Reflow OS

Local deployment on Mac fails #30

Open · sbocconi opened this issue 2 years ago

sbocconi commented 2 years ago

I am trying to launch a local node following the procedure described in the README.

From a fresh git clone, I cd into reflow-os and run make config setup (the exact steps are sketched below). I get:
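For completeness, a minimal reproduction of what I ran, assuming the upstream repository is dyne/reflow-os on GitHub and the Makefile targets documented in the README:

    git clone https://github.com/dyne/reflow-os.git   # assumed clone URL
    cd reflow-os
    make config setup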

Shell output:

12:51:47.068 [error] Postgrex.Protocol (#PID<0.137.0>) failed to connect: ** (DBConnection.ConnectionError) tcp recv: closed
12:51:47.068 [error] Postgrex.Protocol (#PID<0.136.0>) failed to connect: ** (DBConnection.ConnectionError) tcp recv: closed
12:51:48.249 [error] Postgrex.Protocol (#PID<0.137.0>) failed to connect: ** (Postgrex.Error) FATAL 57P03 (cannot_connect_now) the database system is in recovery mode
12:51:49.512 [error] Postgrex.Protocol (#PID<0.136.0>) failed to connect: ** (Postgrex.Error) FATAL 57P03 (cannot_connect_now) the database system is in recovery mode
12:51:49.925 [error] Could not create schema migrations table. This error usually happens due to the following:

  * The database does not exist
  * The "schema_migrations" table, which Ecto uses for managing
    migrations, was defined by another library
  * There is a deadlock while migrating (such as using concurrent
    indexes with a migration_lock)

To fix the first issue, run "mix ecto.create".

To address the second, you can run "mix ecto.drop" followed by
"mix ecto.create". Alternatively you may configure Ecto to use
another table and/or repository for managing migrations:

    config :bonfire, Bonfire.Repo,
      migration_source: "some_other_table_for_schema_migrations",
      migration_repo: AnotherRepoForSchemaMigrations

The full error report is shown below.

** (DBConnection.ConnectionError) connection not available and request was dropped from queue after 2865ms. This means requests are coming in and your connection pool cannot serve them fast enough. You can address this by:

  1. Ensuring your database is available and that you can connect to it
  2. Tracking down slow queries and making sure they are running fast enough
  3. Increasing the pool_size (although this increases resource consumption)
  4. Allowing requests to wait longer by increasing :queue_target and :queue_interval

See DBConnection.start_link/2 for more information

    (ecto_sql 3.7.1) lib/ecto/adapters/sql.ex:760: Ecto.Adapters.SQL.raise_sql_call_error/1
    (elixir 1.12.3) lib/enum.ex:1582: Enum."-map/2-lists^map/1-0-"/2
    (ecto_sql 3.7.1) lib/ecto/adapters/sql.ex:852: Ecto.Adapters.SQL.execute_ddl/4
    (ecto_sql 3.7.1) lib/ecto/migrator.ex:678: Ecto.Migrator.verbose_schema_migration/3
    (ecto_sql 3.7.1) lib/ecto/migrator.ex:504: Ecto.Migrator.lock_for_migrations/4
    (ecto_sql 3.7.1) lib/ecto/migrator.ex:419: Ecto.Migrator.run/4
    (ecto_sql 3.7.1) lib/ecto/migrator.ex:146: Ecto.Migrator.with_repo/3
    lib/release_tasks.ex:7: EctoSparkles.ReleaseTasks.migrate/1
make[1]: *** [rel.setup] Error 1
make: *** [setup] Error 2
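Point 1 in the error's checklist can be verified directly against the database container. A minimal check, assuming docker-compose is in use and the Postgres service is named db (the service name is a guess and may differ in this repo's compose file):

    docker-compose ps                              # is the db service up, or restarting?
    docker-compose logs --tail=50 db               # recent Postgres output
    docker-compose exec db pg_isready -U postgres  # does the server accept connections?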

In the console of the DB container I see an endless repetition of the following log output:

2022-02-08 13:05:42.249 UTC [1] LOG:  database system is ready to accept connections
2022-02-08 13:05:42.251 UTC [3793] FATAL:  could not open file "global/pg_filenode.map": No such file or directory
2022-02-08 13:05:42.255 UTC [3791] FATAL:  could not open file "global/pg_filenode.map": No such file or directory
2022-02-08 13:05:42.258 UTC [1] LOG:  background worker "logical replication launcher" (PID 3793) exited with exit code 1
2022-02-08 13:05:42.258 UTC [1] LOG:  autovacuum launcher process (PID 3791) exited with exit code 1
2022-02-08 13:05:42.258 UTC [1] LOG:  terminating any other active server processes
2022-02-08 13:05:42.272 UTC [1] LOG:  all server processes terminated; reinitializing
2022-02-08 13:05:42.311 UTC [3794] LOG:  database system was interrupted; last known up at 2022-02-08 13:05:42 UTC
2022-02-08 13:05:43.950 UTC [3794] LOG:  database system was not properly shut down; automatic recovery in progress
2022-02-08 13:05:43.958 UTC [3794] LOG:  invalid record length at 0/100FD78: wanted 24, got 0
2022-02-08 13:05:43.958 UTC [3794] LOG:  redo is not required
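The repeated could not open file "global/pg_filenode.map" suggests that the Postgres data directory inside the container's volume is incomplete or corrupted, which would explain the recovery loop above. A possible way to start from a clean state, assuming the data lives in a named Docker volume (the volume name below is hypothetical, check the real one with docker volume ls; note this wipes the local database):

    docker-compose down                    # stop the stack
    docker volume ls | grep reflow         # find the actual Postgres data volume
    docker volume rm reflow-os_db_data     # hypothetical volume name; destroys local DB data
    make config setup                      # re-run the setup from scratch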

The Search container shows:


[MeiliSearch ASCII-art banner]

Database path:      "./data.ms"
Server listening on:    "http://0.0.0.0:7700"
Environment:        "development"
Commit SHA:     "unknown"
Commit date:        "unknown"
Package version:    "0.25.2"

Thank you for using MeiliSearch!

We collect anonymized analytics to improve our product and your experience. To learn more, including how to turn off analytics, visit our dedicated documentation page: https://docs.meilisearch.com/learn/what_is_meilisearch/telemetry.html

Anonymous telemetry:    "Enabled"
Instance UID:       "cd65a02d-aab4-4456-bac2-07bba63877c4"

A Master Key has been set. Requests to MeiliSearch won't be authorized unless you provide an authentication key.

Documentation:      https://docs.meilisearch.com
Source code:        https://github.com/meilisearch/meilisearch
Contact:        https://docs.meilisearch.com/resources/contact.html

[2022-02-08T12:50:06Z INFO  actix_server::builder] Starting 2 workers
[2022-02-08T12:50:06Z INFO  actix_server::server] Actix runtime found. Starting in Actix runtime