boltdb / bolt

An embedded key/value database for Go.
MIT License

panic: page 87 already freed #348

Closed · davecgh closed this 8 years ago

davecgh commented 9 years ago

I've been doing a bit of corruption testing under unexpected power-off scenarios (using an external USB drive and disconnecting it mid-write) and, unfortunately, I'm fairly consistently hitting unrecoverable database corruption due to errors such as "unreachable unfreed" and "page already freed". I understand that corrupting the in-flight write is expected in this scenario, but the database shouldn't be left in an unrecoverable state.

EDIT: I should also note this is on Windows 7 using all default bolt options.

I have a 4KiB database that is corrupted from this scenario that I'll keep around to help debug. To get started, I'm providing the following pertinent details:

goroutine 12 [running]:
github.com/btcsuite/bolt.(*freelist).free(0xc08202b680, 0xca2, 0x2af7000)
       .../bolt/freelist.go:118 +0x35c
github.com/btcsuite/bolt.(*node).spill(0xc0820e2380, 0x0, 0x0)
        .../bolt/node.go:349 +0x280
github.com/btcsuite/bolt.(*node).spill(0xc0820e2310, 0x0, 0x0)
        .../bolt/node.go:336 +0x134
github.com/btcsuite/bolt.(*node).spill(0xc0820e22a0, 0x0, 0x0)
        .../bolt/node.go:336 +0x134
github.com/btcsuite/bolt.(*Bucket).spill(0xc0820dc7c0, 0x0, 0x0)
        .../bolt/bucket.go:536 +0x1cb
github.com/btcsuite/bolt.(*Bucket).spill(0xc0820dc780, 0x0, 0x0)
        .../bolt/bucket.go:503 +0xac9
github.com/btcsuite/bolt.(*Bucket).spill(0xc0820e65c8, 0x0, 0x0)
        .../bolt/bucket.go:503 +0xac9
github.com/btcsuite/bolt.(*Tx).Commit(0xc0820e65b0, 0x0, 0x0)
        .../bolt/tx.go:150 +0x1f5
...
$ bolt pages test.db

ID       TYPE       ITEMS  OVRFLW
======== ========== ====== ======
0        meta       0            
1        meta       0            
2        leaf       14           
3        leaf       19           
4        leaf       19           
5        leaf       17           
6        leaf       17           
7        leaf       27           
8        leaf       22           
9        leaf       17           
10       leaf       17           
11       leaf       15           
12       leaf       19           
13       leaf       26           
14       leaf       14           
15       leaf       17           
16       leaf       25           
17       leaf       17           
18       leaf       16           
19       leaf       22           
20       leaf       17           
21       leaf       15           
22       leaf       27           
23       leaf       22           
24       leaf       16           
25       leaf       16           
26       leaf       24           
27       leaf       24           
28       leaf       27           
29       leaf       23           
30       leaf       23           
31       leaf       18           
32       leaf       26           
33       leaf       16           
34       leaf       22           
35       leaf       16           
36       leaf       25           
37       leaf       19           
38       leaf       23           
39       leaf       14           
40       leaf       25           
41       leaf       28           
42       leaf       16           
43       leaf       16           
44       leaf       16           
45       leaf       20           
46       leaf       18           
47       leaf       26           
48       leaf       18           
49       leaf       24           
50       leaf       28           
51       leaf       23           
52       leaf       21           
53       leaf       18           
54       leaf       24           
55       leaf       20           
56       leaf       27           
57       leaf       18           
58       leaf       18           
59       leaf       23           
60       leaf       28           
61       leaf       26           
62       leaf       17           
63       leaf       16           
64       leaf       26           
65       leaf       19           
66       leaf       15           
67       leaf       14           
68       leaf       17           
69       leaf       18           
70       leaf       27           
71       leaf       21           
72       leaf       23           
73       leaf       16           
74       leaf       19           
75       leaf       23           
76       leaf       19           
77       leaf       21           
78       leaf       17           
79       leaf       24           
80       leaf       25           
81       leaf       16           
82       leaf       27           
83       leaf       15           
84       leaf       21           
85       leaf       22           
86       leaf       23           
87       freelist   6            
88       leaf       16           
89       leaf       15           
90       leaf       22           
91       leaf       19           
92       leaf       14           
93       leaf       18           
94       leaf       19           
95       leaf       18           
96       leaf       21           
97       leaf       23           
98       leaf       19           
99       leaf       25           
100      leaf       19           
101      leaf       17           
102      leaf       22           
103      leaf       16           
104      leaf       25           
105      leaf       20           
106      leaf       17           
107      leaf       22           
108      leaf       15           
109      leaf       16           
110      leaf       21           
111      leaf       15           
112      leaf       14           
113      leaf       27           
114      leaf       17           
115      leaf       15           
116      leaf       21           
117      leaf       26           
118      leaf       26           
119      leaf       22           
120      leaf       20           
121      leaf       16           
122      leaf       25           
123      leaf       15           
124      leaf       18           
125      leaf       18           
126      leaf       16           
127      leaf       28           
128      leaf       18           
129      leaf       17           
130      leaf       19           
131      leaf       25           
132      leaf       26           
133      leaf       24           
134      leaf       18           
135      leaf       26           
136      leaf       20           
137      leaf       18           
138      leaf       23           
139      leaf       26           
140      leaf       15           
141      leaf       17           
142      leaf       24           
143      leaf       16           
144      leaf       16           
145      leaf       22           
146      leaf       15           
147      leaf       18           
148      leaf       26           
149      leaf       14           
150      leaf       14           
151      leaf       14           
152      leaf       15           
153      leaf       15           
154      leaf       17           
155      leaf       27           
156      leaf       22           
157      branch     80           
158      leaf       15           
159      leaf       15           
160      leaf       15           
161      leaf       15           
162      leaf       15           
163      leaf       15           
164      branch     42           
165      branch     45           
166      free                    
167      branch     4            
168      free                    
169      leaf       2            
170      leaf       1            
171      free                    
172      free                    
173      free                    
174      free                    
$ bolt check test.db
page 87: reachable freed
page 105: multiple references
page 83: multiple references
page 87: multiple references
page 87: reachable freed
page 97: multiple references
page 97: multiple references
page 97: multiple references
page 157: multiple references
page 21: multiple references
page 20: multiple references
page 87: multiple references
page 87: reachable freed
page 27: multiple references
page 35: multiple references
page 129: multiple references
page 132: multiple references
page 156: multiple references
page 82: multiple references
page 80: multiple references
page 139: multiple references
page 60: multiple references
page 121: multiple references
page 153: multiple references
page 30: multiple references
page 7: multiple references
page 112: multiple references
page 9: multiple references
page 127: multiple references
page 118: multiple references
page 74: multiple references
page 106: multiple references
page 15: multiple references
page 78: multiple references
page 65: multiple references
page 39: multiple references
page 159: multiple references
page 63: multiple references
page 124: multiple references
page 107: multiple references
page 3: multiple references
page 16: multiple references
page 67: multiple references
page 163: multiple references
page 29: multiple references
page 92: multiple references
page 105: multiple references
page 105: multiple references
page 13: multiple references
page 85: multiple references
page 109: multiple references
page 137: multiple references
page 48: multiple references
page 138: multiple references
page 75: multiple references
page 28: multiple references
page 83: multiple references
page 55: multiple references
page 32: multiple references
page 34: multiple references
page 84: multiple references
page 51: multiple references
page 53: multiple references
page 126: multiple references
page 19: multiple references
page 23: multiple references
page 95: multiple references
page 96: multiple references
page 5: multiple references
page 37: multiple references
page 130: multiple references
page 149: multiple references
page 161: multiple references
page 72: multiple references
page 73: multiple references
page 116: multiple references
page 113: multiple references
page 18: multiple references
page 125: multiple references
page 146: multiple references
page 102: multiple references
page 134: multiple references
page 76: multiple references
page 114: multiple references
page 62: multiple references
page 66: multiple references
page 103: multiple references
page 97: multiple references
page 83: multiple references
page 87: multiple references
page 87: reachable freed
**Hangs here and never completes** 
benbjohnson commented 9 years ago

Thanks for doing corruption testing. Bolt definitely should not be corrupting due to power loss.

The huge number of corrupt pages suggests to me that the pages were written out of order within the transaction. Normally Bolt writes all the data pages, fsyncs, writes the meta page, then fsyncs again. That way, if a failure occurs while the data pages are being written, the meta page is never written, which means those data pages are effectively uncommitted.
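
For illustration, that ordering looks roughly like this (a hypothetical helper and offsets, not Bolt's actual code):

package main

import "os"

// commitSketch shows the write/fsync ordering described above
// (hypothetical; not Bolt's actual implementation).
func commitSketch(f *os.File, dataPages []byte, dataOff int64, metaPage []byte, metaOff int64) error {
	// 1. Write all dirty data pages for the transaction.
	if _, err := f.WriteAt(dataPages, dataOff); err != nil {
		return err
	}
	// 2. fsync: the data pages must be durable before a meta page
	//    referencing them exists on disk.
	if err := f.Sync(); err != nil {
		return err
	}
	// 3. Write the meta page that commits the transaction.
	if _, err := f.WriteAt(metaPage, metaOff); err != nil {
		return err
	}
	// 4. fsync again: only now is the transaction committed. A crash
	//    before this point leaves the old meta page in effect and the
	//    new data pages unreferenced.
	return f.Sync()
}

func main() {
	f, err := os.Create("sketch.db")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	data := make([]byte, 4096)  // pretend data pages
	metaP := make([]byte, 4096) // pretend meta page
	if err := commitSketch(f, data, 4096, metaP, 0); err != nil {
		panic(err)
	}
}

If a drive-level cache acknowledges and then reorders or drops these writes, step 3 can become durable before step 1, which breaks the invariant.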

I'll do some testing with a USB stick. Can you tell me what USB drive you're using? I don't think it should matter but it's good to narrow down the issue if possible. Also, I'll take a look at how Windows does file syncing to make sure it's correct in Bolt.

I'm about to go to sleep but I'll look at this first thing in the morning.

benbjohnson commented 9 years ago

Also, can you send me the script you used during testing and any steps you performed during the test? e.g. how long did the test run before you pulled the USB?

davecgh commented 9 years ago

Thanks for the quick response.

I'm using a (quite old) Kingston DataTraveler 1GB USB stick. I don't have a script at the moment, as this was discovered as part of a larger project I'm working on (storing bitcoin blocks on the file system while using bolt to store the metadata for their locations, headers, transaction indexes, etc.).

I will put something smaller together to help track this down as I know how important it is to have a targeted and reproducible test program for situations like this.

EDIT: Oh, I didn't answer your timing question. I have typically been pulling it within 2-3 seconds of starting the writes; however, I've waited up to 10 seconds and encountered the same results. Also, I should note that it doesn't happen every time. Most of the time there are no issues, but I'm seeing the corruption maybe 1 time out of 50.

benbjohnson commented 9 years ago

@davecgh I've been running bolt bench with the following parameters this morning against an Amazon Basics 16GB USB drive:

$ bolt bench --batch-size 1 --count 10000 --work --path /Volumes/USB16/bolt.db

It didn't take me long to hit corruption, although the corruption is strange. My meta pages show the following:

// meta0
root:     24
freelist: 25
hwm:      26
txnid:    630
// meta1
root:     7
freelist: 10
hwm:      26
txnid:    629

But the data file itself shows nothing written to page 14 or higher; it's all just zeros filled in by truncate(). Since the previous transaction (meta1, txnid 629) had written its high-water mark up to page 26, there should be data in pages 15 through 26.
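
For context, Bolt keeps two alternating meta pages and, on open, uses the newest one that passes validation, so a torn meta write falls back to the previous transaction. Roughly (simplified; the real check also verifies magic, version, and checksum):

package main

import "fmt"

// meta is a simplified stand-in for Bolt's meta page header.
type meta struct {
	txid     uint64 // transaction id, incremented each commit
	root     uint64 // page id of the root bucket
	freelist uint64 // page id of the freelist
	hwm      uint64 // high-water mark: file size in pages
	valid    bool   // true if magic/version/checksum all verified
}

// pickMeta returns the newest valid meta page. If the latest commit's
// meta was torn by the power cut, the previous transaction's meta wins.
func pickMeta(m0, m1 meta) (meta, bool) {
	if m0.valid && m1.valid {
		if m0.txid > m1.txid {
			return m0, true
		}
		return m1, true
	}
	if m0.valid {
		return m0, true
	}
	if m1.valid {
		return m1, true
	}
	return meta{}, false // both torn: the file is unrecoverable
}

func main() {
	m0 := meta{txid: 630, root: 24, freelist: 25, hwm: 26, valid: true}
	m1 := meta{txid: 629, root: 7, freelist: 10, hwm: 26, valid: true}
	m, ok := pickMeta(m0, m1)
	fmt.Println(m.txid, ok) // 630 true: the newer meta wins
}

Here both metas were apparently intact but pointed past data that never reached the disk, which is why I suspect a write cache below the file system.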

So I googled around and it looks like there's a write cache in front of the USB drive on Windows (and on Mac). I can't for the life of me get it disabled on my Mac because the mounting goes through diskutil mount instead of the standard Unix mount.

It looks like there's a bunch of articles on Google about disabling it on Windows though. Can you try disabling write caching and see if you still have the issue?

xiang90 commented 9 years ago

@davecgh @benbjohnson

I am not sure this is just a USB issue. I did some similar testing with a real disk and found no issues.

Can you reproduce the issue on an AWS disk? @benbjohnson

davecgh commented 9 years ago

@benbjohnson I have been running with write caching disabled already; the issue still periodically happens. I was able to reproduce it with the bench tool as well (although I had to add a --path option to the bench sub-command, as I don't see one on master).

$ git diff bench.go main.go
diff --git a/cmd/bolt/bench.go b/cmd/bolt/bench.go
index 91af960..1e51e76 100644
--- a/cmd/bolt/bench.go
+++ b/cmd/bolt/bench.go
@@ -32,7 +32,12 @@ func Bench(options *BenchOptions) {
        }

        // Find temporary location.
-       path := tempfile()
+       path := options.Path
+       if path == "" {
+               path = tempfile()
+       }

        if options.Clean {
                defer os.Remove(path)
@@ -368,6 +373,7 @@ type BenchOptions struct {
        FillPercent   float64
        NoSync        bool
        Clean         bool
+       Path          string
 }

 // BenchResults represents the performance results of the benchmark.
diff --git a/cmd/bolt/main.go b/cmd/bolt/main.go
index 183d1f2..7e8e908 100644
--- a/cmd/bolt/main.go
+++ b/cmd/bolt/main.go
@@ -99,6 +99,7 @@ func NewApp() *cli.App {
                                &cli.Float64Flag{Name: "fill-percent", Value: bolt.DefaultFillPercent, Usage: "Fill percentage"},
                                &cli.BoolFlag{Name: "no-sync", Usage: "Skip fsync on every commit"},
                                &cli.BoolFlag{Name: "work", Usage: "Print the temp db and do not delete on exit"},
+                               &cli.StringFlag{Name: "path", Usage: "Specify the db path"},
                        },
                        Action: func(c *cli.Context) {
                                statsInterval, err := time.ParseDuration(c.String("stats-interval"))
@@ -121,6 +122,7 @@ func NewApp() *cli.App {
                                        FillPercent:   c.Float64("fill-percent"),
                                        NoSync:        c.Bool("no-sync"),
                                        Clean:         !c.Bool("work"),
+                                       Path:          c.String("path"),
                                })
                        },
                }}
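
With the patch applied, the bench tool can be pointed at the removable drive directly; for example (the drive letter here is illustrative):

$ bolt bench --batch-size 1 --count 10000 --work --path E:\bolt-test.db
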
nicpottier commented 9 years ago

I've also run into this while stress testing, but in my case I'm going against an SSD on OS X. What can I provide to help debug? I still have the db, but it is 2 GB.

mdmarek commented 9 years ago

I've also run into this issue using an SSD on Google Compute Engine instances. Bolt built from: https://github.com/boltdb/bolt/commit/033d4ec028192f38aef67ae47bd7b89f343145b5

xiang90 commented 9 years ago

Any progress on this?

epsniff commented 9 years ago

I work with @mdmarek; we've been hitting this panic more often now, with the same stack trace as posted by the original poster. In our case, we call tx.Commit() on a timed interval, and we've only seen the panic for databases under heavy load (large commits). We've been able to recover from this, but I'm letting people know we're still hitting it.
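
To illustrate the pattern, here is a hypothetical sketch (the bucket name, channel, and interval are made up, not our actual code):

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/boltdb/bolt"
)

type kv struct{ key, val []byte }

// flushLoop buffers incoming writes and commits them in one large
// transaction on each tick, mirroring the timed-interval commits
// described above.
func flushLoop(db *bolt.DB, pending <-chan kv) {
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()

	var batch []kv
	for {
		select {
		case item := <-pending:
			batch = append(batch, item) // accumulate between ticks
		case <-ticker.C:
			if len(batch) == 0 {
				continue
			}
			// Under heavy load this becomes a very large commit,
			// which is the situation where we see the panic.
			err := db.Update(func(tx *bolt.Tx) error {
				b, err := tx.CreateBucketIfNotExists([]byte("data"))
				if err != nil {
					return err
				}
				for _, item := range batch {
					if err := b.Put(item.key, item.val); err != nil {
						return err
					}
				}
				return nil
			})
			if err != nil {
				log.Println("flush:", err)
			}
			batch = batch[:0]
		}
	}
}

func main() {
	db, err := bolt.Open("timed.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	pending := make(chan kv, 1024)
	go flushLoop(db, pending)
	for i := 0; ; i++ {
		pending <- kv{[]byte(fmt.Sprintf("k%d", i)), []byte("v")}
	}
}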

jezell commented 8 years ago

Also seeing this error frequently, but bolt check doesn't show any db corruption. The panic is:

page 14 already freed:
...
gopkg.in/boltdb/bolt%2ev1.(*freelist).free(0xc208110c00, 0x17c, 0x7f2e83109000)
        /go/src/gopkg.in/boltdb/bolt.v1/freelist.go:117 +0x355
gopkg.in/boltdb/bolt%2ev1.(*node).spill(0xc208014690, 0x0, 0x0)
        /go/src/gopkg.in/boltdb/bolt.v1/node.go:358 +0x279
gopkg.in/boltdb/bolt%2ev1.(*node).spill(0xc208014620, 0x0, 0x0)
        /go/src/gopkg.in/boltdb/bolt.v1/node.go:345 +0x12d
gopkg.in/boltdb/bolt%2ev1.(*Bucket).spill(0xc20810d2c0, 0x0, 0x0)
        /go/src/gopkg.in/boltdb/bolt.v1/bucket.go:540 +0x1c4
gopkg.in/boltdb/bolt%2ev1.(*Bucket).spill(0xc2080371f8, 0x0, 0x0)
        /go/src/gopkg.in/boltdb/bolt.v1/bucket.go:507 +0xac2
gopkg.in/boltdb/bolt%2ev1.(*Tx).Commit(0xc2080371e0, 0x0, 0x0)
        /go/src/gopkg.in/boltdb/bolt.v1/tx.go:154 +0x1ee
gopkg.in/boltdb/bolt%2ev1.(*DB).Update(0xc208038b60, 0xc2080c9810, 0x0, 0x0)
        /go/src/gopkg.in/boltdb/bolt.v1/db.go:554 +0x169
xiang90 commented 8 years ago

@jezell @epsniff @benbjohnson @nicpottier @gyuho I really think we should get this fixed. Can anyone provide a way to reproduce this? I would like to look into it.

benbjohnson commented 8 years ago

@xiang90 I don't have a way to reproduce right now. I'm doing some refactoring and clean up on the test suite in order to get better, long running simulation testing. I think that'll help to reproduce these types of issues.

newhook commented 8 years ago

We're having major troubles with one of our production bolt databases continually dying with an error similar to @jezell's.

benbjohnson commented 8 years ago

@newhook @jezell can you give me any details about your usage of bolt? DB size, key & value sizes, sequential writes or random writes, delete frequency, nested buckets, etc. Anything really. I'm trying to reproduce the issue but I'm not sure what's triggering it right now. It seems to happen to a few people frequently but other people it doesn't happen at all.

jezell commented 8 years ago

We had been using bolt for a git LFS implementation running on top of GlusterFS. We were able to mitigate the issue by migrating everything away from GlusterFS and moving bolt to run directly on top of mounted SSD storage. The git repos were not experiencing any corruption, but the bolt databases used to track the LFS chunks were being corrupted regularly. I suspect it was related to some kind of concurrency or fsync-type problem between Gluster and how Bolt writes to disk, since the corruption hasn't occurred since we eliminated Gluster from the equation.

newhook commented 8 years ago

We're running on a RAID 5 array of three SSDs (Intel SSDSC2BB800H4) on Ubuntu 14, using an LSI MegaRAID SAS 9271-4i controller. The application had been running for a long time (4 months or so) before the corruption occurred.

benbjohnson commented 8 years ago

@newhook Can you post your stack trace too? Also, what bolt commit are you using?

@jezell Are you still using gopkg.in/boltdb/bolt.v1? I wonder if master would work better.

xiang90 commented 8 years ago

@benbjohnson Did we fix anything related recently?

newhook commented 8 years ago

panic: page 4 already freed

goroutine 818021 [running]:
github.com/boltdb/bolt.(*freelist).free(0xc2fd236c90, 0x5b0, 0x7ecaaacc4000)
        /source/.gopack/vendor/src/github.com/boltdb/bolt/freelist.go:117 +0x355
github.com/boltdb/bolt.(*node).spill(0xc48b570e70, 0x0, 0x0)
        /source/.gopack/vendor/src/github.com/boltdb/bolt/node.go:358 +0x279
github.com/boltdb/bolt.(*Bucket).spill(0xc55bcffdc0, 0x0, 0x0)
        /source/.gopack/vendor/src/github.com/boltdb/bolt/bucket.go:536 +0x1c4
github.com/boltdb/bolt.(*Bucket).spill(0xc55bcffd80, 0x0, 0x0)
        /source/.gopack/vendor/src/github.com/boltdb/bolt/bucket.go:503 +0xac2
github.com/boltdb/bolt.(*Bucket).spill(0xc6581b6b78, 0x0, 0x0)
        /source/.gopack/vendor/src/github.com/boltdb/bolt/bucket.go:503 +0xac2
github.com/boltdb/bolt.(*Tx).Commit(0xc6581b6b60, 0x0, 0x0)
        /source/.gopack/vendor/src/github.com/boltdb/bolt/tx.go:151 +0x1ee
github.com/boltdb/bolt.(*DB).Update(0xc27dc99380, 0xca1e051d68, 0x0, 0x0)
        /source/.gopack/vendor/src/github.com/boltdb/bolt/db.go:554 +0x169
customerio/index/index.(*DBMBolt).UpdateShard(0xc20800d5f0, 0x90da, 0xc5a9d20a50, 0x0, 0x0)
        /source/src/customerio/index/index/db_mbolt.go:448 +0xc9
customerio/index/index.func·057(0xc5000090da)
        /source/src/customerio/index/index/db_memory.go:359 +0x503
created by customerio/index/index.(*DBMemory).Flush
        /source/src/customerio/index/index/db_memory.go:367 +0x34a
benbjohnson commented 8 years ago

@xiang90 There haven't been many code changes recently except the CoreOS ones. v1 is from Nov 2014; there have been a decent number of changes between v1 and v1.1, which was released in Oct 2015.

benbjohnson commented 8 years ago

@xiang90 Are you seeing this issue?

xiang90 commented 8 years ago

@benbjohnson No. But I would love to fix it if anyone can reliably reproduce it.

newhook commented 8 years ago

@benbjohnson I'm not sure exactly what version, but chances are it was whatever was current on Jul 8 11:18:40 2015.

ghost commented 8 years ago

@benbjohnson the version @newhook is using is commit 04a3e85793043e76d41164037d0d7f9d53eecae3 (I work with him and just checked the vendored source).

epsniff commented 8 years ago

@benbjohnson I wrote some code to reproduce this. So far I've gotten it to trigger twice after running for about 10 minutes. I'll post a link in a few minutes.

epsniff commented 8 years ago

It's not consistent how long it needs to run before it panics, but it has created the panic 3 out of 3 times I've run it: twice within 10 minutes, though once it ran longer. https://gist.github.com/epsniff/f919369dccb4d5c13504

I'm running on an 8-core i7 CPU with 8GB of RAM and a Sony SSD drive, on Ubuntu 14, but I've seen it on Debian too.

xiang90 commented 8 years ago

@epsniff Can you try to reproduce it (confirm it can be reproduced) in a cloud environment? Or I can try to do it next week. That would make it easier for us to look into the issue.

epsniff commented 8 years ago

@xiang90 I was able to reproduce it on a Google Compute Engine (GCE) instance of type n1-standard-4 (4 vCPUs, 15 GB memory) with a 200GB standard persistent disk, running Go 1.5.2 on Debian 7. The version of BoltDB has commit 033d4ec028192f38aef67ae47bd7b89f343145b5 as the last commit.

benbjohnson commented 8 years ago

@epsniff Awesome! That helps a ton. Also, what Go version are you running? And do you know what file system your /tmp directory is running on?

epsniff commented 8 years ago

Go 1.5.2. And /tmp was on a standard persistent disk, which I believe is equivalent to AWS's EBS General Purpose (SSD) disks.

xiang90 commented 8 years ago

@epsniff Great! I will try and get back to you guys next week.

epsniff commented 8 years ago

The disks were formatted with ext4...

benbjohnson commented 8 years ago

@epsniff I'm able to get the panic on my MBP after about 20m. That gives me something to work with! :boom:

epsniff commented 8 years ago

@benbjohnson, @xiang90 did either of you ever get a chance to work on this one?

xiang90 commented 8 years ago

@epsniff Sorry. I should have. I will try this week.

benbjohnson commented 8 years ago

@epsniff I've spent time on it and made progress to reproduce it consistently but I haven't had time to resolve the issue yet.

benbjohnson commented 8 years ago

I found the issue. Fixed with https://github.com/boltdb/bolt/pull/539.

@epsniff Thanks again for the program to reproduce. I refactored it a bit to continuously run in smaller chunks until it panics, and to save the transaction log so the failure can be replayed:

https://gist.github.com/benbjohnson/9d2ebbc90b8b52f3fe25

I ran the script to reproduce for 24h without any issue. Previously it failed in 10-20m. If anyone else wants to test the changes, let me know.
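
Roughly, the harness has this shape (a sketch of the approach; the real code is in the gist above, and these names are made up):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"math/rand"
	"os"

	"github.com/boltdb/bolt"
)

type op struct{ Key, Value []byte }

func main() {
	db, err := bolt.Open("repro.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var oplog []op
	defer func() {
		// If a chunk panics, persist the operation log so the exact
		// sequence can be replayed against a fresh database.
		if r := recover(); r != nil {
			if f, err := os.Create("oplog.json"); err == nil {
				json.NewEncoder(f).Encode(oplog)
				f.Close()
			}
			panic(r)
		}
	}()

	// Run forever in small chunks until something panics.
	for {
		err := db.Update(func(tx *bolt.Tx) error {
			b, err := tx.CreateBucketIfNotExists([]byte("bench"))
			if err != nil {
				return err
			}
			for i := 0; i < 100; i++ {
				o := op{
					Key:   []byte(fmt.Sprintf("%016x", rand.Int63())),
					Value: make([]byte, rand.Intn(512)+1),
				}
				oplog = append(oplog, o)
				if err := b.Put(o.Key, o.Value); err != nil {
					return err
				}
			}
			return nil
		})
		if err != nil {
			log.Fatal(err)
		}
	}
}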

newhook commented 8 years ago

That is awesome, thanks so much Ben!

epsniff commented 8 years ago

@benbjohnson thank you for fixing it.

zllak commented 7 years ago

We are using boltdb vendored at commit 583e8937c61f1af6513608ccc75c97b6abdf4ff9 (v1.3.0). Unfortunately, I just got this panic running gomobile on an iPhone 6s, iOS 10 (Go 1.7). We are doing multiple write transactions with fsync disabled.

panic: page 311 already freed

goroutine 16 [running]:
panic(0x100d91820, 0x10e3bc710)
    /usr/local/go/src/runtime/panic.go:500 +0x390
github.com/.../vendor/github.com/boltdb/bolt.(*freelist).free(0x10d84ec00, 0x3de, 0x12fb9c000)
    /Users/zllak/work/go/src/github.com/.../vendor/github.com/boltdb/bolt/freelist.go:117 +0x280
github.com/.../vendor/github.com/boltdb/bolt.(*node).spill(0x10e458540, 0x0, 0x0)
    /Users/zllak/work/go/src/github.com/.../vendor/github.com/boltdb/bolt/node.go:363 +0x24c
github.com/.../vendor/github.com/boltdb/bolt.(*node).spill(0x10e247b90, 0x0, 0x0)
    /Users/zllak/work/go/src/github.com/.../vendor/github.com/boltdb/bolt/node.go:350 +0x108
github.com/.../vendor/github.com/boltdb/bolt.(*Bucket).spill(0x10e33f300, 0x0, 0x0)
    /Users/zllak/work/go/src/github.com/.../vendor/github.com/boltdb/bolt/bucket.go:541 +0x190
github.com/.../vendor/github.com/boltdb/bolt.(*Bucket).spill(0x10da5c2b8, 0x0, 0x0)
    /Users/zllak/work/go/src/github.com/.../vendor/github.com/boltdb/bolt/bucket.go:508 +0x840
github.com/.../vendor/github.com/boltdb/bolt.(*Tx).Commit(0x10da5c2a0, 0x0, 0x0)
    /Users/zllak/work/go/src/github.com/.../vendor/github.com/boltdb/bolt/tx.go:163 +0x19c
github.com/.../vendor/github.com/boltdb/bolt.(*DB).Update(0x10d6d0b40, 0x10ec0ba18, 0x0, 0x0)
    /Users/zllak/work/go/src/github.com/.../vendor/github.com/boltdb/bolt/db.go:602 +0x12c
...

Should I open a new issue, or could we reopen this one? cc @benbjohnson @xiang90 @davecgh
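
(For reference, Bolt disables fsync through the DB.NoSync flag; a minimal sketch of that configuration, with an illustrative file name:)

package main

import (
	"log"

	"github.com/boltdb/bolt"
)

func main() {
	db, err := bolt.Open("app.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// NoSync skips fsync after every commit. It trades durability for
	// speed: after a crash or power loss the file can contain torn or
	// reordered pages, which may later surface as freelist panics.
	db.NoSync = true
}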

ghost commented 7 years ago

I also encountered this while testing write speed.

I will try to come up with a reproduction.

"C:\Program Files (x86)\JetBrains\Gogland 163.10615.6\bin\runnerw.exe" C:/Go\bin\go.exe run G:/编程/golang/src/github.com/gamexg/go-test/数据库/boltdb/1.go
panic: page 178 already freed

goroutine 8846 [running]:
panic(0x4c8980, 0xc04235b890)
    C:/Go/src/runtime/panic.go:500 +0x1af
github.com/boltdb/bolt.(*freelist).free(0xc04203ff50, 0x2254, 0x2522000)
    G:/编程/golang/src/github.com/boltdb/bolt/freelist.go:117 +0x2c7
github.com/boltdb/bolt.(*node).spill(0xc04230a620, 0xc042462280, 0xc042462280)
    G:/编程/golang/src/github.com/boltdb/bolt/node.go:358 +0x1e3
github.com/boltdb/bolt.(*node).spill(0xc04230a5b0, 0xc0424d6630, 0xc0421119e0)
    G:/编程/golang/src/github.com/boltdb/bolt/node.go:345 +0xbe
github.com/boltdb/bolt.(*Bucket).spill(0xc0423bae80, 0xc0424d6500, 0xc042111c50)
    G:/编程/golang/src/github.com/boltdb/bolt/bucket.go:541 +0x442
github.com/boltdb/bolt.(*Bucket).spill(0xc04206d898, 0x219cf650, 0x566800)
    G:/编程/golang/src/github.com/boltdb/bolt/bucket.go:508 +0x942
github.com/boltdb/bolt.(*Tx).Commit(0xc04206d880, 0x0, 0x0)
    G:/编程/golang/src/github.com/boltdb/bolt/tx.go:163 +0x12c
github.com/boltdb/bolt.(*DB).Update(0xc042078000, 0xc04225bed0, 0x0, 0x0)
    G:/编程/golang/src/github.com/boltdb/bolt/db.go:602 +0x114
github.com/boltdb/bolt.(*batch).run(0xc042249300)
    G:/编程/golang/src/github.com/boltdb/bolt/db.go:722 +0x106
github.com/boltdb/bolt.(*batch).(github.com/boltdb/bolt.run)-fm()
    G:/编程/golang/src/github.com/boltdb/bolt/db.go:696 +0x31
sync.(*Once).Do(0xc042249310, 0xc04225bf58)
    C:/Go/src/sync/once.go:44 +0xe2
github.com/boltdb/bolt.(*batch).trigger(0xc042249300)
    G:/编程/golang/src/github.com/boltdb/bolt/db.go:696 +0x53
github.com/boltdb/bolt.(*batch).(github.com/boltdb/bolt.trigger)-fm()
    G:/编程/golang/src/github.com/boltdb/bolt/db.go:666 +0x31
created by time.goFunc
    C:/Go/src/time/sleep.go:154 +0x4b
exit status 2

Windows 7 in ESXi, on an NFS datastore.

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"github.com/boltdb/bolt"
)

func test1() {
	// Open the test1.db data file in the current directory.
	// It will be created if it doesn't exist.
	os.Remove("test1.db")
	db, err := bolt.Open("test1.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = db.Update(func(tx *bolt.Tx) error {
		_, err := tx.CreateBucketIfNotExists([]byte("MyBucket"))
		return err
	})
	if err != nil {
		log.Fatal(err)
	}

	sTime := time.Now()
	// One Update transaction per Put.
	for i := 0; i < 10000; i++ {
		err = db.Update(func(tx *bolt.Tx) error {
			b := tx.Bucket([]byte("MyBucket"))
			return b.Put([]byte(fmt.Sprint("key", i)), []byte("00000000000000000000"))
		})
		if err != nil {
			log.Fatal(err)
		}
	}
	// Observed: 33.2749032s
	fmt.Println("One Update per Put:", time.Since(sTime))
}

func test2() {
	// Open the test2.db data file in the current directory.
	// It will be created if it doesn't exist.
	os.Remove("test2.db")
	db, err := bolt.Open("test2.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = db.Update(func(tx *bolt.Tx) error {
		_, err := tx.CreateBucketIfNotExists([]byte("MyBucket"))
		return err
	})
	if err != nil {
		log.Fatal(err)
	}

	sTime := time.Now()
	// One Batch call per Put; Batch coalesces concurrent calls into
	// shared transactions.
	for i := 0; i < 10000; i++ {
		err = db.Batch(func(tx *bolt.Tx) error {
			b := tx.Bucket([]byte("MyBucket"))
			return b.Put([]byte(fmt.Sprint("key", i)), []byte("00000000000000000000"))
		})
		if err != nil {
			log.Fatal(err)
		}
	}

	fmt.Println("Batched Puts:", time.Since(sTime))
}

func test3() {
	// Open the test3.db data file in the current directory.
	// It will be created if it doesn't exist.
	os.Remove("test3.db")
	db, err := bolt.Open("test3.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	err = db.Update(func(tx *bolt.Tx) error {
		_, err := tx.CreateBucketIfNotExists([]byte("MyBucket"))
		return err
	})
	if err != nil {
		log.Fatal(err)
	}

	sTime := time.Now()
	// All Puts in a single transaction.
	err = db.Update(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("MyBucket"))
		for i := 0; i < 10000; i++ {
			if err := b.Put([]byte(fmt.Sprint("key", i)), []byte("00000000000000000000")); err != nil {
				return err
			}
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	// Observed: 100.0057ms
	fmt.Println("Single transaction:", time.Since(sTime))
}

func main() {
	//test1()
	test2()
}
ghost commented 7 years ago

The G: drive is a mapped network drive, served over NFS from FreeNAS. The panic occurs when testing on the G: drive; it does not appear when testing on a local disk.