xelaj / mtproto

🦋 Full-native go implementation of Telegram API

Getting panic: (*serialize.BadMsgNotification) 0xc0000381c0 [recovered] on Auth example #49

Open TibebeJS opened 3 years ago

TibebeJS commented 3 years ago

I'm getting the following error when I try to run the example auth code:

panic: (*serialize.BadMsgNotification) 0xc0000381c0 [recovered]
    panic: (*serialize.BadMsgNotification) 0xc0000381c0

goroutine 19 [running]:
github.com/xelaj/mtproto.(*MTProto).recoverGoroutine(0xc0001cc6c0)
    /home/das/go/pkg/mod/github.com/xelaj/mtproto@v0.0.0-20201129002337-2c42bdacb467/mtproto_utils.go:77 +0x125
panic(0x761fe0, 0xc0000381c0)
    /usr/lib/go/src/runtime/panic.go:969 +0x1b9
github.com/xelaj/mtproto.(*MTProto).processResponse(0xc0001cc6c0, 0x5fdb37fd706ac401, 0x0, 0x8425c0, 0xc00010e000, 0xc00010e000, 0x0)
    /home/das/go/pkg/mod/github.com/xelaj/mtproto@v0.0.0-20201129002337-2c42bdacb467/mtproto.go:350 +0x87e
github.com/xelaj/mtproto.(*MTProto).startReadingResponses.func1(0xc0001cc6c0, 0x842c40, 0xc0001d7dc0)
    /home/das/go/pkg/mod/github.com/xelaj/mtproto@v0.0.0-20201129002337-2c42bdacb467/mtproto.go:277 +0x296
created by github.com/xelaj/mtproto.(*MTProto).startReadingResponses
    /home/das/go/pkg/mod/github.com/xelaj/mtproto@v0.0.0-20201129002337-2c42bdacb467/mtproto.go:253 +0x70
exit status 2

What is the possible cause and how do I solve it?

quenbyako commented 3 years ago

Hey @TibebeJS!

Could you please test your code on this branch? You can switch to it in two ways: if you're using go mod, add the commit hash (@6582c6e); if not, you can just check out the branch manually:

cd $GOPATH/src/github.com/xelaj/mtproto
git fetch github new_decoder_refactoring
git checkout new_decoder_refactoring
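
For the go mod way, a pinned install like this should also work (go resolves the commit hash to a pseudo-version):

go get github.com/xelaj/mtproto@6582c6e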

If this won't help, ping me, i'll look deeper into this bug.

TibebeJS commented 3 years ago

sorry for the delay,

I have given the @6582c6e version a try (with some modifications to the code, since some of the APIs have changed).

But a similar issue still persists:

&objects.BadMsgNotification{
  BadMsgID:    6908692239836047620,
  BadMsgSeqNo: 1,
  Code:        17,
} nil
panic: (*objects.BadMsgNotification) 0xc0000341a0 [recovered]
    panic: (*objects.BadMsgNotification) 0xc0000341a0

goroutine 19 [running]:
github.com/xelaj/mtproto.(*MTProto).recoverGoroutine(0xc00015e6c0)
    /home/dastier/go/pkg/mod/github.com/xelaj/mtproto@v0.0.0-20201128003255-6582c6e8201a/mtproto_utils.go:77 +0x125
panic(0x66bbe0, 0xc0000341a0)
    /usr/lib/go/src/runtime/panic.go:969 +0x1b9
github.com/xelaj/mtproto.(*MTProto).processResponse(0xc00015e6c0, 0x7a8220, 0xc0000aa000, 0x58, 0x7a8220)
    /home/dastier/go/pkg/mod/github.com/xelaj/mtproto@v0.0.0-20201128003255-6582c6e8201a/mtproto.go:337 +0x725
github.com/xelaj/mtproto.(*MTProto).startReadingResponses.func1(0xc00015e6c0, 0x7a8760, 0xc000169400)
    /home/dastier/go/pkg/mod/github.com/xelaj/mtproto@v0.0.0-20201128003255-6582c6e8201a/mtproto.go:282 +0x290
created by github.com/xelaj/mtproto.(*MTProto).startReadingResponses
    /home/dastier/go/pkg/mod/github.com/xelaj/mtproto@v0.0.0-20201128003255-6582c6e8201a/mtproto.go:256 +0x70
exit status 2

my go.mod file:

module github.com/TibebeJs/mein-tg-test

go 1.15

require (
    github.com/k0kubun/pp v3.0.1+incompatible
    github.com/xelaj/go-dry v0.0.0-20201104165138-61a25872c05a
    github.com/xelaj/mtproto v0.0.0-20201128003255-6582c6e8201a
)

What could I be missing?

TibebeJS commented 3 years ago

Any idea on what the underlying issue might be, please?

quenbyako commented 3 years ago

@TibebeJS sorry for the long delay, i'll come back to you after the holidays

innovative-sol commented 3 years ago

Hi,

Same problem here for the "get_updates" example...

&objects.BadMsgNotification{
  BadMsgID:    6933951333566766692,
  BadMsgSeqNo: 7,
  Code:        33,
} &errors.withStack{
  error: &errors.withMessage{
    cause: &mtproto.BadMsgError{
      BadMsgNotification: &objects.BadMsgNotification{
        BadMsgID:    6933951333566766692,
        BadMsgSeqNo: 7,
        Code:        33,
      },
      Description: "msg_seqno too high (similarly, there is a message with a higher msg_id but with either a lower or an equal and odd seqno)",
    },
    msg: "processing response",
  },
  stack: &errors.stack{
    0x6b0f93,
    0x462a71,
  },
}

aofei commented 3 years ago

This panic occurs whenever I download a large file (about 1 GiB). I'm curious what caused it. Any guesses?

aofei commented 3 years ago

And it's always either mtproto.ErrBadMsgSeqNoTooLow or mtproto.ErrBadMsgSeqNoTooHigh.

quenbyako commented 3 years ago

Craaaap, i don't know how to fix it...

Maybe we need to add an additional mutex to the tcp connection?

aofei commented 3 years ago

FYI:

quenbyako commented 3 years ago

@aofei thanks a lot, i'll try to experiment with this this weekend (i'll come back to this thread in the near future, i promise!)

@Lonami if you have a little free time, it would be so amazing if you could chime in here about how to make this package better :pray: This help is REALLY needed.

Lonami commented 3 years ago

Haven't done anything with go or heard of this package before the ping, so it's going to be about impossible for me to provide any assistance. Telethon's handling of code 32/33 is a hack which really shouldn't be executed. For 16 or 17, though, you do need to fix your time offset and resend the corresponding message. Note that Telegram has a page explaining this. PM me on Telegram if you're developing stuff with MTProto and want to join an unofficial group of MTProto devs, maybe other peeps can help.
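
For reference, a minimal sketch of that time-offset fix, assuming only the MTProto rule that msg_id approximately equals the unix time multiplied by 2^32; timeOffset is an illustrative helper, not this package's actual API:

import "time"

// timeOffset estimates local clock skew from a server-generated message ID.
// Per the MTProto spec, msg_id is roughly unixtime << 32.
func timeOffset(serverMsgID int64) int64 {
    serverTime := serverMsgID >> 32       // server-side unix seconds
    return serverTime - time.Now().Unix() // apply this when generating new msg_ids
}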

aofei commented 3 years ago

Hey @quenbyako, any luck so far?

I have a guess that this problem may be related to the request processing speed of the MTProto server. In fact, this problem rarely occurs if I put a rate limiting strategy (like 10 req/s) in my code.
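
For anyone who wants to reproduce that workaround, here is a minimal sketch using golang.org/x/time/rate to cap calls at roughly 10 req/s; the throttled wrapper is hypothetical glue, and how you thread it through your code is up to you:

import (
    "context"

    "golang.org/x/time/rate"
)

// Allow ~10 requests per second with a burst of 1.
var limiter = rate.NewLimiter(10, 1)

func throttled(ctx context.Context, call func() error) error {
    // Wait blocks until the limiter permits another event or ctx is done.
    if err := limiter.Wait(ctx); err != nil {
        return err
    }
    return call()
}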

quenbyako commented 3 years ago

@aofei: I tried to copy the message handling from telethon (thanks Lonami for the hint) but no progress at all...

Someone whispered a small insider tip in my ear: you're not required to save the seqno in the session, and you can recreate the connection starting from 1. I'll try this approach on the weekend and come back with results

aofei commented 3 years ago

@quenbyako

Good news, I fixed this bug by simply adding this magicMu to MTProto.sendPacketNew (the problem never happened again in my program). However, I don't know why it works (so I called it "magicMu"). Any ideas?

var magicMu sync.Mutex

func (m *MTProto) sendPacketNew(request tl.Object, expectedTypes ...reflect.Type) (chan tl.Object, error) {
        magicMu.Lock()

        resp := make(chan tl.Object)
        if m.serviceModeActivated {
                resp = m.serviceChannel
        }
        var data []byte
        var err error
        var msgID = utils.GenerateMessageId()
        msg, err := tl.Marshal(request)
        if err != nil {
                magicMu.Unlock()
                return nil, errors.Wrap(err, "encoding request message")
        }

        if m.encrypted {
                requireToAck := false
                if MessageRequireToAck(request) {
                        m.mutex.Lock()
                        m.waitAck(msgID)
                        m.mutex.Unlock()
                        requireToAck = true
                }

                data, err = (&messages.Encrypted{
                        Msg:         msg,
                        MsgID:       msgID,
                        AuthKeyHash: m.authKeyHash,
                }).Serialize(m, requireToAck)
                if err != nil {
                        magicMu.Unlock()
                        return nil, errors.Wrap(err, "serializing message")
                }

                if !isNullableResponse(request) {
                        m.mutex.Lock()
                        m.responseChannels[msgID] = resp
                        if len(expectedTypes) > 0 {
                                m.expectedTypes[msgID] = expectedTypes
                        }
                        m.mutex.Unlock()
                } else {
                        // no response is expected for TL_Ack, TL_Pong, etc.
                        go func() {
                                // goroutine, because we are NOT reading from resp RIGHT NOW
                                resp <- &objects.Null{}
                        }()
                }
                // this bit is not part of the encoding, so we do it on send
                m.lastSeqNo += 2
        } else {
                data, _ = (&messages.Unencrypted{ //nolint:errcheck unencrypted serialization doesn't return errors
                        Msg:   msg,
                        MsgID: msgID,
                }).Serialize(m)
        }

        magicMu.Unlock()

        //? https://core.telegram.org/mtproto/mtproto-transports#intermediate
        size := make([]byte, 4)
        binary.LittleEndian.PutUint32(size, uint32(len(data)))
        _, err = m.conn.Write(size)
        if err != nil {
                return nil, errors.Wrap(err, "sending data")
        }

        //? https://core.telegram.org/mtproto/mtproto-transports#abridged
        // _, err := m.conn.Write(utils.PacketLengthMTProtoCompatible(data))
        // check(err)
        _, err = m.conn.Write(data)
        if err != nil {
                return nil, errors.Wrap(err, "sending request")
        }

        return resp, nil
}

Lonami commented 3 years ago

Sounds like a data race; not sure how the library is organized but if there's concurrent access to the MTP state (like message ID or sequence number), that should indeed be behind a lock.

quenbyako commented 3 years ago

@aofei oooow i got it! the problem is that multiple requests are getting a seqno in parallel, but that's illegal for the telegram service. E.g.:

req1: get me seqno
mtproto: locks
req2: get me seqno
mtproto: return seqno for req1, unlocks
mtproto: return seqno for req2, unlocks
req2: send my message with this seqno
req1: send my message with this seqno
telegram service: oopsie! req2 is too far ahead, i expected req1

Boom, it returns a bad message notification.

Fuck, i should have noticed this earlier: the seqno mutex and the request mutex should be a single one instead of two different ones.

quenbyako commented 3 years ago

@aofei i'll refactor the full method and commit today or tomorrow, thanks for the help! @Lonami you too 😎

quenbyako commented 3 years ago

@innovative-sol @TibebeJS @aofei looks like we fixed it. Try it out with the latest commit and give us a little feedback, please. Hope it works as well for you as it does for me.

aofei commented 3 years ago

@quenbyako

Bad news, I still get this error in my tests. 😥

aofei commented 3 years ago

Perhaps we should ensure that MTProto.sendPacket is always thread-safe, which means putting the entire MTProto.sendPacket method behind a lock (not just this). And it's reasonable to do so; it's a TCP connection, after all.
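
A minimal sketch of that idea: one lock held from message numbering all the way through conn.Write, so no two requests can interleave on the wire. sendMutex and buildMessage are hypothetical stand-ins for this package's real internals:

// Sketch only: sendMutex and buildMessage are hypothetical stand-ins.
func (m *MTProto) sendPacketSerialized(request tl.Object) error {
    m.sendMutex.Lock() // one lock for the whole send path
    defer m.sendMutex.Unlock()

    msgID := utils.GenerateMessageId()
    data, err := m.buildMessage(request, msgID) // marshal, assign seqno, encrypt
    if err != nil {
        return err
    }
    // the write happens before the lock is released, so packets reach the
    // TCP connection in the same order their seqno was assigned
    _, err = m.conn.Write(data)
    return err
}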

aofei commented 3 years ago

It seems that the msg.seqNo is set by this line, so we must ensure thread safety from this line until the entire request is sent.

quenbyako commented 3 years ago

@aofei

1) MTProto.sendPacket is not thread-safe by design (each request can be prepared asynchronously and then sent synchronously). So all the stuff behind m.seqNoMutex is actually preparation stuff. Btw, glad that you noticed: m.seqNoMutex can be locked even after the message is created.
2) this is the request marshaler, not the message one. imagine that a request (tl.Object) is like json and a message is like an http message, same mechanics

MJDrgn commented 3 years ago

From logging inside sendPacket, I've found two issues that seem to be causing this together. Firstly, the current logic only increments seqNo if the message was encrypted. However, the Telegram spec says this should be conditional on the message requiring acknowledgement, with no mention of encryption: "those requiring acknowledgment, and in particular those that are not containers".
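
For reference, that spec rule can be sketched like this; contentMsgs is a hypothetical counter, not a field this package actually has:

// msg_seqno = 2 * (content-related messages created so far),
// plus 1 if the current message is itself content-related.
func (m *MTProto) nextSeqNo(contentRelated bool) int32 {
    seq := m.contentMsgs * 2
    if contentRelated {
        seq++
        m.contentMsgs++
    }
    return seq
}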

Secondly, the error occurs after two messages with identical msgIDs happen - not seqNo misordering or thread safety as thought before. I have identified that in some cases the response comes back with an RpcError, which makeRequest handles by submitting a second recursive call to m.makeRequest (mtproto.go:199). This happens quickly enough that the GenerateMessageId function returns an identical ID - it appears the UnixNano function does not have sufficient resolution. From experience on a previous project, this value is often rounded - https://github.com/golang/go/issues/27301 covers this issue.

There are two potential fixes - one is to use the method suggested in that thread of storing a Start time and using time.Sub to get nanosecond precision that way. The second, which I've tested, is to move the seqNoMutex lock to before GenerateMessageId, then change GenerateMessageId to store the last generated value - if the new value is identical to the last one, simply increment by 4.

var lastMessageId int64

// GenerateMessageId essentially returns a unix timestamp, but in a terribly specific way
// TODO: why the hell is the bitwise AND with -4 needed??
func GenerateMessageId() (r int64) {
    const billion = 1000 * 1000 * 1000
    unixnano := time.Now().UnixNano()
    seconds := unixnano / billion
    nanoseconds := unixnano % billion
    r = (seconds << 32) | (nanoseconds & -4)
    if r == lastMessageId {
        r += 4
    }
    lastMessageId = r
    return r
}

This works and solves the issue, as duplicate IDs won't occur. However, it could potentially break if any function other than sendPacket called this utility function - I'd suggest moving it into network.go if this fix is to be used, so it can be a private method.
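
For completeness, an untested sketch of the first fix mentioned above (storing a start time and deriving the nanosecond part from the monotonic clock, which sidesteps the rounding from golang.org/issue/27301):

import "time"

var processStart = time.Now()

// time.Since reads the monotonic clock, so the nanosecond component keeps
// full resolution even where time.Now().UnixNano() is rounded.
func GenerateMessageId() int64 {
    unixnano := processStart.UnixNano() + int64(time.Since(processStart))
    seconds := unixnano / int64(time.Second)
    nanoseconds := unixnano % int64(time.Second)
    return (seconds << 32) | (nanoseconds & -4)
}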

Unfortunately I'm not able to fully test this fix due to https://github.com/xelaj/mtproto/issues/86 blocking me from further testing - but since implementing this change I have gone from receiving the BadMsgNotification on almost all runs to never.

Let me know if you'd like a PR of a suggested full fix.

hookzof commented 3 years ago

@MJDrgn Thanks, it seems to be working for me.

quenbyako commented 3 years ago

@hookzof do you have any patch which i can merge? @MJDrgn is right about unixnano, it is rounded, so maybe it's better to add a random value from 0 to 999 to the nanoseconds. i don't believe that this alone could fix the issue, but it looks important.
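
Sketched (untested), that jitter idea would just pad the existing GenerateMessageId with a random value:

import (
    "math/rand"
    "time"
)

// Untested sketch: pad the possibly-rounded UnixNano value with a random
// 0-999 offset so two back-to-back calls are less likely to collide.
func GenerateMessageId() int64 {
    const billion = 1000 * 1000 * 1000
    unixnano := time.Now().UnixNano() + rand.Int63n(1000)
    seconds := unixnano / billion
    nanoseconds := unixnano % billion
    return (seconds << 32) | (nanoseconds & -4)
}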

hookzof commented 3 years ago

@quenbyako I did; there may be some additions from @MJDrgn.

Errorcodertv3 commented 3 years ago

@quenbyako panic: (*objects.BadMsgNotification) 0xc0001d4020 - help me?

EniMusguera commented 2 years ago

Auth:

&objects.BadMsgNotification{
  BadMsgID:    7144231798508581348,
  BadMsgSeqNo: 7,
  Code:        33,
}
panic: (*objects.BadMsgNotification) 0xc000037d00

sad