getavalon / docker

Avalon in a Dockerfile
MIT License

Persistent Database #9

Closed · tokejepsen closed 6 years ago

tokejepsen commented 6 years ago

Goal

Option for referencing a persistent database on disk.

Motivation

In order to use the Docker distribution for production, we'll need a persistent database so users don't have to recreate their projects every time the Docker container is restarted.

Implementation

A simple flag to the Docker container, mirroring the mongod flag --dbpath, could be an option.
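
For example (a sketch, using a hypothetical host path), the container could be started with a bind mount onto /data/db, which is where mongod keeps its data by default:

$ docker run --name avalon -v /mnt/storage/avalon-db:/data/db -p 445:445 -p 27017:27017 getavalon/docker:0.2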

mottosso commented 6 years ago

Let me know what you think of this.

$ docker run -d --name avalon -v avalon-db:/data/db -p 445:445 -p 27017:27017 getavalon/docker:0.2

The volume is automatically created on first run, and persists across runs.

$ docker volume ls
avalon-db

Also works consistently on all platforms, including Windows.
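
If you want to see where the volume's files actually live, docker volume inspect prints the details; note that on Windows/macOS the Mountpoint it reports sits inside the Docker VM, not directly on the host filesystem:

$ docker volume inspect avalon-db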

tokejepsen commented 6 years ago

Cool, that works!

I would like to be able to back up the database over the network though, so I need to know where the database is stored. I'm guessing it lives in Docker's internals since it's a volume, and you'd somehow need to mount that volume on the host machine?

mottosso commented 6 years ago

You can export the database via mongodump. I would not recommend backing up the raw files, as they're mostly temporary memory (and much larger, in the multi-gigabyte range, even with a tiny actual database).
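
For example (a sketch, assuming the image ships the mongodump binary, as stock MongoDB installations do):

$ docker exec avalon mongodump --out /tmp/dump
$ docker cp avalon:/tmp/dump ./avalon-backup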

Otherwise, you could expose the files via something like Samba and get to them that way. There might be a Docker tool for exporting the volume itself as well, but I haven't checked.
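
If you do want the raw volume contents anyway, one common pattern (a sketch, not specific to this image) is to tar them up from a throwaway container:

$ docker run --rm -v avalon-db:/data/db -v "$(pwd)":/backup busybox tar czf /backup/avalon-db.tar.gz -C /data db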

tokejepsen commented 6 years ago

I seem to get an error after running the command to persist the database, killing the container, and then starting a new container against the persisted database.

$ docker run --name avalon --rm -p 445:445 -p 27017:27017 getavalon/docker:0.2
Added user avalon.
2018-06-08T11:20:21.254+0000 I CONTROL  [initandlisten] MongoDB starting : pid=11 port=27017 dbpath=/data/db 64-bit host=be7e0c3bfaaf
2018-06-08T11:20:21.255+0000 I CONTROL  [initandlisten] db version v3.6.4
2018-06-08T11:20:21.255+0000 I CONTROL  [initandlisten] git version: d0181a711f7e7f39e60b5aeb1dc7097bf6ae5856
2018-06-08T11:20:21.256+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
2018-06-08T11:20:21.256+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2018-06-08T11:20:21.257+0000 I CONTROL  [initandlisten] modules: none
2018-06-08T11:20:21.257+0000 I CONTROL  [initandlisten] build environment:
2018-06-08T11:20:21.257+0000 I CONTROL  [initandlisten]     distmod: ubuntu1404
2018-06-08T11:20:21.258+0000 I CONTROL  [initandlisten]     distarch: x86_64
2018-06-08T11:20:21.260+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2018-06-08T11:20:21.260+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true } }
2018-06-08T11:20:21.261+0000 I STORAGE  [initandlisten]
2018-06-08T11:20:21.262+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2018-06-08T11:20:21.263+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2018-06-08T11:20:21.264+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=256M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),cache_cursors=false,log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2018-06-08T11:20:23.098+0000 I STORAGE  [initandlisten] WiredTiger message [1528456823:98625][11:0x7f6c6ef31a00], txn-recover: Set global recovery timestamp: 0
[2018/06/08 11:20:24.330236,  0] ../lib/util/become_daemon.c:124(daemon_ready)
  STATUS=daemon 'smbd' finished starting up and ready to serve connections
[2018/06/08 11:20:24.333117,  0] ../lib/tdb_wrap/tdb_wrap.c:64(tdb_wrap_log)
  tdb(/var/run/samba/serverid.tdb): expand_file write of 4096 bytes failed (No space left on device)
[2018/06/08 11:20:24.335012,  0] ../source3/smbd/server.c:892(open_sockets_smbd)
  open_sockets_smbd: Failed to register myself in serverid.tdb
[2018/06/08 11:20:24.454661,  0] ../source3/lib/util.c:789(smb_panic_s3)
  PANIC (pid 10): open_sockets_smbd() failed
2018-06-08T11:20:24.494+0000 E STORAGE  [initandlisten] WiredTiger error (28) [1528456824:494707][11:0x7f6c6ef31a00], WT_SESSION.create: /data/db/_mdb_catalog.wt: handle-write: pwrite: failed to write 4096 bytes at offset 0: No space left on device
2018-06-08T11:20:24.866+0000 F -        [initandlisten] Fatal assertion 28520 UnknownError: 28: No space left on device at src/mongo/db/storage/kv/kv_storage_engine.cpp 103
2018-06-08T11:20:24.868+0000 F -        [initandlisten]

***aborting after fassert() failure

[2018/06/08 11:20:24.910252,  0] ../source3/lib/util.c:900(log_stack_trace)
  BACKTRACE: 9 stack frames:
   #0 /usr/lib/x86_64-linux-gnu/samba/libsmbregistry.so.0(log_stack_trace+0x1a) [0x7feab9e331da]
   #1 /usr/lib/x86_64-linux-gnu/samba/libsmbregistry.so.0(smb_panic_s3+0x20) [0x7feab9e332b0]
   #2 /usr/lib/x86_64-linux-gnu/libsamba-util.so.0(smb_panic+0x2f) [0x7feababaa8df]
   #3 /usr/lib/x86_64-linux-gnu/samba/libsmbd-base.so.0(+0x14d475) [0x7feaba7ae475]
   #4 /usr/lib/x86_64-linux-gnu/samba/libsmbd-base.so.0(+0x14d751) [0x7feaba7ae751]
   #5 /usr/lib/x86_64-linux-gnu/samba/libsmbd-shim.so.0(exit_server+0x12) [0x7feab868bc82]
   #6 /usr/sbin/smbd(main+0x1133) [0x561b8486e243]
   #7 /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7feab7375f45]
   #8 /usr/sbin/smbd(+0x7a96) [0x561b8486ea96]
[2018/06/08 11:20:24.917400,  0] ../source3/lib/util.c:801(smb_panic_s3)
  smb_panic(): calling panic action [/usr/share/samba/panic-action 10]
2018-06-08T11:20:24.988+0000 F -        [initandlisten] Got signal: 6 (Aborted).

 0x55d564df8781 0x55d564df7999 0x55d564df7e7d 0x7f6c6d98e330 0x7f6c6d5ebc37 0x7f6c6d5ef028 0x55d56355963d 0x55d56376e5ac 0x55d5635edf1a 0x55d5637e2aa7 0x55d5634f2027 0x55d5635cd0cc 0x55d56355b029 0x7f6c6d5d6f45 0x55d5635bc9df
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"55D562BCA000","o":"222E781","s":"_ZN5mongo15printStackTraceERSo"},{"b":"55D562BCA000","o":"222D999"},{"b":"55D562BCA000","o":"222DE7D"},{"b":"7F6C6D97E000","o":"10330"},{"b":"7F6C6D5B5000","o":"36C37","s":"gsignal"},{"b":"7F6C6D5B5000","o":"3A028","s":"abort"},{"b":"55D562BCA000","o":"98F63D","s":"_ZN5mongo42fassertFailedWithStatusNoTraceWithLocationEiRKNS_6StatusEPKcj"},{"b":"55D562BCA000","o":"BA45AC","s":"_ZN5mongo15KVStorageEngineC2EPNS_8KVEngineERKNS_22KVStorageEngineOptionsESt8functionIFSt10unique_ptrINS_26KVDatabaseCatalogEntryBaseESt14default_deleteIS8_EENS_10StringDataEPS0_EE"},{"b":"55D562BCA000","o":"A23F1A"},{"b":"55D562BCA000","o":"C18AA7","s":"_ZN5mongo20ServiceContextMongoD29initializeGlobalStorageEngineEv"},{"b":"55D562BCA000","o":"928027"},{"b":"55D562BCA000","o":"A030CC","s":"_ZN5mongo11mongoDbMainEiPPcS1_"},{"b":"55D562BCA000","o":"991029","s":"main"},{"b":"7F6C6D5B5000","o":"21F45","s":"__libc_start_main"},{"b":"55D562BCA000","o":"9F29DF"}],"processInfo":{ "mongodbVersion" : "3.6.4", "gitVersion" : "d0181a711f7e7f39e60b5aeb1dc7097bf6ae5856", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "4.4.115-boot2docker", "version" : "#1 SMP Thu Feb 8 17:36:45 UTC 2018", "machine" : "x86_64" }, "somap" : [ { "b" : "55D562BCA000", "elfType" : 3, "buildId" : "70D6733FD632814467C815CC451FCB974C3885D4" }, { "b" : "7FFC7E6DC000", "elfType" : 3, "buildId" : "C93B50C9FA1DC8F4D244BE9AF7B04649276D1913" }, { "b" : "7F6C6EAFF000", "path" : "/lib/x86_64-linux-gnu/libresolv.so.2", "elfType" : 3, "buildId" : "AD304AFCE6847F7A4D66D22853E87CCBF5A66966" }, { "b" : "7F6C6E8A0000", "path" : "/lib/x86_64-linux-gnu/libssl.so.1.0.0", "elfType" : 3, "buildId" : "153DD7C8BAA1EC675DFB847F0094D0E19A0F12DA" }, { "b" : "7F6C6E4C4000", "path" : "/lib/x86_64-linux-gnu/libcrypto.so.1.0.0", "elfType" : 3, "buildId" : "77F60431275365046D7023A1AF79FB1250F91C1A" }, { "b" : "7F6C6E2BC000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "4F930712D3609C93E380E5BE5DF73E7AD273531C" }, { "b" : "7F6C6E0B8000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : "034D6A4EE9DCAB4A34ABD644345CBBB42DC63088" }, { "b" : "7F6C6DDB2000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "300C7884CDEB5667BEA2357D2B8E7A76397562D6" }, { "b" : "7F6C6DB9C000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "36311B4457710AE5578C4BF00791DED7359DBB92" }, { "b" : "7F6C6D97E000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "F64B8AD471FBA1B7A3A64EFB01551E694975E1F7" }, { "b" : "7F6C6D5B5000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "D9A10B8EF90300628DD0A3A535106967714D7328" }, { "b" : "7F6C6ED1A000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "2CA513EDC89C7BC06EC183D1A3A03CC0F606319C" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x55d564df8781]
 mongod(+0x222D999) [0x55d564df7999]
 mongod(+0x222DE7D) [0x55d564df7e7d]
 libpthread.so.0(+0x10330) [0x7f6c6d98e330]
 libc.so.6(gsignal+0x37) [0x7f6c6d5ebc37]
 libc.so.6(abort+0x148) [0x7f6c6d5ef028]
 mongod(_ZN5mongo42fassertFailedWithStatusNoTraceWithLocationEiRKNS_6StatusEPKcj+0x0) [0x55d56355963d]
 mongod(_ZN5mongo15KVStorageEngineC2EPNS_8KVEngineERKNS_22KVStorageEngineOptionsESt8functionIFSt10unique_ptrINS_26KVDatabaseCatalogEntryBaseESt14default_deleteIS8_EENS_10StringDataEPS0_EE+0x4AC) [0x55d56376e5ac]
 mongod(+0xA23F1A) [0x55d5635edf1a]
 mongod(_ZN5mongo20ServiceContextMongoD29initializeGlobalStorageEngineEv+0x637) [0x55d5637e2aa7]
 mongod(+0x928027) [0x55d5634f2027]
 mongod(_ZN5mongo11mongoDbMainEiPPcS1_+0x86C) [0x55d5635cd0cc]
 mongod(main+0x9) [0x55d56355b029]
 libc.so.6(__libc_start_main+0xF5) [0x7f6c6d5d6f45]
 mongod(+0x9F29DF) [0x55d5635bc9df]
-----  END BACKTRACE  -----
Aborted
mottosso commented 6 years ago

Hm, I can't reproduce that. :S

tokejepsen commented 6 years ago

> Hm, I can't reproduce that.

Cleaning out my Docker and having another look.
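
The usual checks when Docker reports "No space left on device", as in the log above (a sketch of the standard commands):

$ docker system df     # show space used by images, containers and volumes
$ docker system prune  # reclaim space from stopped containers and dangling images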

For backing up the database, I was thinking we could dump the BSON files to a Samba share? Eventually I'll document this in the README.

mottosso commented 6 years ago

Yep, makes sense to me. Could even have an avalon --backup for it, that writes it to a:\backups or the like.
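
A hypothetical sketch of what such a wrapper might run under the hood (the avalon --backup flag doesn't exist yet; --archive streams the dump to stdout):

$ mkdir -p backups
$ docker exec avalon mongodump --archive --gzip > backups/avalon-$(date +%F).archive.gz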

tokejepsen commented 6 years ago

avalon --backup would be a very nice feature!

The problem at the moment is that the code would need to call mongodump inside the container, which might assume too much, like which container it is or what it's named.
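
One way around that (a sketch, assuming the MongoDB tools are installed on the host) is to run mongodump from outside the container, against the port the container already exposes, so no assumptions about the container itself are needed:

$ mongodump --host localhost --port 27017 --archive=avalon-backup.archive --gzip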

The other option is to do it purely in Python, but that might struggle with a large number of documents: https://jira.mongodb.org/browse/PYTHON-664?focusedCommentId=527522&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-527522 Would you say a typical project has a lot of documents?

mottosso commented 6 years ago

I think Python should work well; I've put together Python scripts that transfer 6 GB/sec without issue, so I don't think the language itself will be a problem.

On top of that, I'm looking at one instance of Avalon after 14 months and it currently occupies 90 MB. These are tiny, tiny documents we're talking about. Maybe @BigRoy could share the size of his database at the moment (right-click the top-level database, "Database Statistics").
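
The same number is also available from the command line (a sketch, assuming the mongo shell is available in the container):

$ docker exec avalon mongo avalon --eval "printjson(db.stats())"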

Double on top of that, a backup taking seconds, minutes, or even hours is fine if it only happens once a day or week.

tokejepsen commented 6 years ago

This has been implemented and documented.