[Open] gems2020 opened this issue 6 years ago
Thanks for your report. We'll investigate the issues in more depth in a couple of days (we're currently focusing on a different part of SLikeNet). If the issues you reported are something you need dealt with right away, please let me know and I'll try to shift priorities.
Furthermore, if it's fine with you, could you sign/agree to the CLA so that we are legally on the safe side when using your report/contributions in SLikeNet (see https://github.com/SLikeSoft/SLikeNet/blob/master/.github/CONTRIBUTING.md )?
Doing a quick initial review of your reports:
@Issue 1: You suggest that otherUser is not allocated. I take it you mean that it's not initialized before use (or am I getting you wrong here)? If you do mean it's not initialized, could you please elaborate? As far as my quick code review goes, it should be initialized correctly to either the sender or the recipient (see ln 313 ff.). Also, the if-check you suggest to fix the issue should always evaluate to true, since the sender and recipient are ensured to never be the same user (see NatPunchthroughServer::OnNATPunchthroughRequest() ln 403). Am I missing something?
@Issue 2: Do you have a repro demonstrating an integer overflow? Since this might be a security issue, please contact us at security@slikesoft.com. I'd rather not reveal further details publicly before we can determine the impact. Please understand, however, that as far as I can tell the suggested changes would not suffice here if this is indeed an exploitable integer overflow.
Issue 1:
The error occurs as follows:
line 334: insert
line 337: deallocation
line 345 -> line 584: reference error
Issue 2: Since my source has already been modified several times, it is difficult to reproduce immediately. However, the approximate test environment is as follows:
- Windows x64
- internal network at 1000 Mbps
- thread set to Sleep(1)
- structure with a 1 MB buffer size
- file data over 2 GB sent in quick succession
Example:
struct PT_FILE_TRANSFER
{
    int filesize;
    int offset;
    char filename[DEF_FILE_NAME_LENGTH];
    unsigned char buffer[DEF_FILE_BUFFER_SIZE];
};
while (1)
{
    Sleep(1);
    peer->Send(buffer, size, priority, RELIABLE_ORDERED, 1, guid, false);
}
It occurs randomly during high-speed transmission without congestion. To catch the error, you have to run the test repeatedly many times.
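For context, the loop above was driven by something along the following lines. This is only an illustrative sketch, not my exact code: the constants, SendLargeFile(), and the include paths are assumptions, and only the Send() call matches the loop above.

#include <windows.h>   // Sleep()
#include <cstdio>
#include <cstring>
#include "slikenet/peerinterface.h"   // adjust the include path to your SLikeNet/RakNet layout

#define DEF_FILE_NAME_LENGTH 260            // placeholder value
#define DEF_FILE_BUFFER_SIZE (1024 * 1024)  // 1 MB payload per packet, as in the struct above

struct PT_FILE_TRANSFER
{
    int filesize;
    int offset;
    char filename[DEF_FILE_NAME_LENGTH];
    unsigned char buffer[DEF_FILE_BUFFER_SIZE];
};

// Streams a large (>2 GB) file in 1 MB chunks as fast as the 1 ms pacing allows.
// 'peer' and 'guid' are assumed to belong to an already established connection.
// (A real application would normally prefix the payload with a message ID byte.)
void SendLargeFile(SLNet::RakPeerInterface *peer, const SLNet::RakNetGUID &guid, const char *path)
{
    PT_FILE_TRANSFER pkt = {};
    strncpy(pkt.filename, path, DEF_FILE_NAME_LENGTH - 1);

    FILE *fp = fopen(path, "rb");
    if (fp == nullptr)
        return;

    fseek(fp, 0, SEEK_END);
    pkt.filesize = static_cast<int>(ftell(fp));  // note: this itself truncates above 2 GB
    fseek(fp, 0, SEEK_SET);

    size_t read;
    while ((read = fread(pkt.buffer, 1, DEF_FILE_BUFFER_SIZE, fp)) > 0)
    {
        Sleep(1);  // same 1 ms pacing as in the loop above
        peer->Send(reinterpret_cast<const char *>(&pkt), static_cast<int>(sizeof(pkt)),
                   HIGH_PRIORITY, RELIABLE_ORDERED, 1, guid, false);
        pkt.offset += static_cast<int>(read);
    }
    fclose(fp);
}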
"I hereby declare I've read and agree to the current CLA as provided under https://github.com/SLikeSoft/SLikeNet/blob/master/.github/CONTRIBUTING.md"
@Issue 1: I still can't follow that from a pure code review point of view.

line 334: freedUpInProgressUsers.Insert(otherUser, _FILE_AND_LINE_);
This adds the otherUser pointer to the list of users whose pending connection attempts are to be cleaned up later.

line 337: otherUser->DeleteConnectionAttempt(connectionAttempt);
This deallocates the connection attempt that was processed (not the otherUser).

line 345 / 584: connectionAttempt = user->connectionAttempts[i];
This references/accesses the otherUser, which is still allocated, right?
Note that the otherUser will eventually be deallocated upon the corresponding call to OnClosedConnection() for that "other" user. If you think I'm still overlooking/missing something, please let me know.
@issue 2: Thanks for the repro. Looking through our test cases I see we are not yet covering this exact scenario. Let me get back to you on that one. (Assigned internal case number: SLNET-234).
Also, thanks for acknowledging the CLA. I've added you to our records.
Sorry, the deallocation is at line 340, not line 337.
When connecting from local to local: user == otherUser == connectionAttempt->sender == connectionAttempt->recipient
If users[i] is deleted, the otherUser added at line 334 is also deleted, because it is the same object.
NatPunchthroughServer.cpp
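To make the aliasing concrete, here is a minimal self-contained sketch of the same pattern. The types are stand-ins that only mirror the names used in NatPunchthroughServer.cpp; this is not the actual implementation:

#include <vector>

struct ConnectionAttempt { /* stand-in */ };

struct User
{
    std::vector<ConnectionAttempt *> connectionAttempts;   // stand-in member
};

int main()
{
    std::vector<User *> users;
    std::vector<User *> freedUpInProgressUsers;

    // Local-to-local case: sender and recipient are the same User object.
    User *user = new User();
    users.push_back(user);
    User *otherUser = user;   // alias: user == otherUser

    // ~line 334: remember otherUser for later processing
    freedUpInProgressUsers.push_back(otherUser);

    // ~line 340: users[i] is deallocated; this destroys the object otherUser points to
    delete users[0];
    users.erase(users.begin());

    // ~line 345 -> 584: the remembered pointer is dereferenced again: use-after-free
    for (User *u : freedUpInProgressUsers)
        (void)u->connectionAttempts.size();   // dangling access, visible under ASan

    return 0;
}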
2. Integer Overflow
This error occurs when approximately 2 GB or more of data is transferred continuously.
I replaced the affected counters with 64-bit types in the following places (a rough sketch of the kind of change follows the list):
CCRakNetSlidingWindow.h
CCRakNetSlidingWindow.cpp
ReliabilityLayer.h (line 580)
ReliabilityLayer.cpp (line 1959)
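Roughly, the change looks like the sketch below. The counter names here are placeholders, not the real members at the lines listed above; it only illustrates why widening to 64 bits avoids the wrap during a multi-gigabyte transfer.

#include <cstdint>
#include <cstdio>

int main()
{
    // 'bytes32' stands in for the original 32-bit counters; 'bytes64' is the widened replacement.
    uint32_t bytes32 = 0;   // wraps at 4 GB (a signed int wraps, with undefined behavior, near 2 GB)
    uint64_t bytes64 = 0;

    const uint32_t chunk = 1024 * 1024;   // 1 MB per Send(), as in the repro above
    for (int i = 0; i < 4200; ++i)        // roughly 4.1 GB of traffic
    {
        bytes32 += chunk;                 // wraps back past zero
        bytes64 += chunk;                 // stays correct
    }

    printf("32-bit counter: %u\n", bytes32);   // small bogus value after the wrap
    printf("64-bit counter: %llu\n", (unsigned long long)bytes64);
    return 0;
}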
This is a temporary workaround, not a fundamental fix. I hope someone can fix it properly.