fix(savedata): guard against sub-minimum backup data in recovery

nullcomp's passthrough path returns non-cmp-header data as-is without
error, which is correct for old uncompressed saves. However, a corrupt
backup slot containing garbage shorter than the minimum save layout
(100 bytes) would pass Decompress() and then panic in
updateStructWithSaveData() with a slice-bounds error at the name field
read (offset 88–100).

Add a minSaveSize check after each backup decompresses; skip the slot
if the result is too small. Also document the campaign system and the
fix in CHANGELOG under Unreleased.
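
For reference, a minimal standalone sketch of the failure mode and the guard, assuming only the layout described above (name field at bytes 88-100, hence a 100-byte minimum). The decompress/readName/recoverSlot helpers and the constant values are illustrative, not the project's actual code.

package main

import (
	"bytes"
	"errors"
	"fmt"
)

const (
	saveFieldNameOffset = 88 // assumed from "name field at offset 88-100"
	saveFieldNameLen    = 12
	minSaveSize         = saveFieldNameOffset + saveFieldNameLen // 100 bytes
)

// decompress mimics nullcomp's passthrough behaviour: input without a "cmp"
// header is returned unchanged, so garbage bytes sail through without error.
func decompress(data []byte) ([]byte, error) {
	if !bytes.HasPrefix(data, []byte("cmp")) {
		return data, nil // passthrough for old uncompressed saves
	}
	return nil, errors.New("real decompression omitted from this sketch")
}

// readName is the kind of fixed-offset read that panics with a slice-bounds
// error when the blob is shorter than 100 bytes.
func readName(save []byte) string {
	raw := save[saveFieldNameOffset : saveFieldNameOffset+saveFieldNameLen]
	return string(bytes.TrimRight(raw, "\x00"))
}

// recoverSlot applies the fix: reject anything smaller than the minimum save
// layout before any fixed-offset field is read.
func recoverSlot(raw []byte) (string, bool) {
	decomp, err := decompress(raw)
	if err != nil {
		return "", false
	}
	if len(decomp) < minSaveSize {
		return "", false // too small to be a valid save; skip this slot
	}
	return readName(decomp), true
}

func main() {
	garbage := []byte("short junk")             // corrupt backup slot
	valid := make([]byte, 128)                  // plausible old uncompressed save
	copy(valid[saveFieldNameOffset:], "Hunter") // name field

	for i, slot := range [][]byte{garbage, valid} {
		if name, ok := recoverSlot(slot); ok {
			fmt.Printf("slot %d: recovered, name=%q\n", i, name)
		} else {
			fmt.Printf("slot %d: skipped\n", i)
		}
	}
}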
Houmgaor
2026-03-20 11:46:01 +01:00
parent 47a07ec52c
commit 34f0e89e7b
2 changed files with 19 additions and 0 deletions

@@ -114,6 +114,19 @@ func recoverFromBackups(s *Session, base *CharacterSaveData, charID uint32) (*Ch
continue
}
// nullcomp passes through data without a "cmp" header as-is (legitimate for
// old uncompressed saves). Guard against garbage data that is too small to
// contain the minimum save layout (name field at offset 88-100).
const minSaveSize = saveFieldNameOffset + saveFieldNameLen
if len(candidate.decompSave) < minSaveSize {
s.logger.Warn("Backup slot data too small after decompression, skipping",
zap.Uint32("charID", charID),
zap.Int("slot", backup.Slot),
zap.Int("size", len(candidate.decompSave)),
)
continue
}
s.logger.Warn("Savedata recovered from backup — primary was corrupt",
zap.Uint32("charID", charID),
zap.Int("slot", backup.Slot),