fix(channelserver): correct session handler retail mismatches (#167)

Lobby search now returns only quest-bound players (QuestReserved) instead
of all reserved slots, matching retail behavior. The new field is
pre-collected under server lock before stage iteration to respect
Server.Mutex → Stage.RWMutex lock ordering.
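The two-phase pattern described above can be sketched as a minimal, self-contained example. The types below are simplified hypothetical stand-ins (the real `Server`/`Stage`/`Session` structs in channelserver carry many more fields), but the locking order is the one the commit documents:

```go
package main

import "sync"

// Simplified stand-ins for the real channelserver types (hypothetical).
type Stage struct {
	sync.RWMutex
	id       string
	reserved map[uint32]struct{}
}

type Session struct {
	charID uint32
	stage  *Stage
}

type Server struct {
	sync.Mutex
	sessions []*Session
	stages   []*Stage
}

// QuestReservedCounts demonstrates the ordering: phase 1 snapshots quest
// membership under Server.Mutex and releases it; phase 2 takes each
// Stage.RWMutex only after the server lock is gone, so the documented
// Server.Mutex -> Stage.RWMutex order can never invert.
func (s *Server) QuestReservedCounts() map[string]int {
	s.Lock()
	inQuest := make(map[uint32]bool)
	for _, sess := range s.sessions {
		// Quest stage IDs carry "Qs" at bytes 3..4 (e.g. "sl1Qs210", an
		// illustrative ID, not a real retail one).
		if sess.stage != nil && len(sess.stage.id) > 4 && sess.stage.id[3:5] == "Qs" {
			inQuest[sess.charID] = true
		}
	}
	s.Unlock()

	counts := make(map[string]int)
	for _, st := range s.stages {
		st.RLock()
		n := 0
		for charID := range st.reserved {
			if inQuest[charID] {
				n++
			}
		}
		st.RUnlock()
		counts[st.id] = n
	}
	return counts
}
```

Releasing the server lock before touching any stage lock is the whole point: holding both simultaneously in the wrong order is what the pre-collection avoids.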

Replaced three TODOs with RE documentation from Ghidra decompilation of
mhfo-hd.dll ZZ:
- Log key off-by-one: putRecord_log/putTerminal_log pass size 0 for the
  key field in ZZ, so the stored key is unused beyond issuance
- User search padding: ZZ per-entry parser confirms 40-byte block via
  memcpy(dst, src+8, 0x28); G2 DLL analysis inconclusive (stripped)
- Player count: field at entry offset 0x08 maps to struct param_1[0xe]
Houmgaor
2026-02-27 17:29:32 +01:00
parent 649eebe67c
commit 21f9a79b62
5 changed files with 62 additions and 26 deletions


@@ -45,6 +45,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 ### Fixed
+- Fixed lobby search returning all reserved players instead of only quest-bound players — `QuestReserved` field now counts only clients in "Qs" (quest) stages, matching retail behavior ([#167](https://github.com/Mezeporta/Erupe/issues/167))
+- Documented log key off-by-one (RE'd from mhfo-hd.dll ZZ): `putRecord_log`/`putTerminal_log` don't use the key in ZZ, so the server value is unused beyond issuance ([#167](https://github.com/Mezeporta/Erupe/issues/167))
+- Documented user search padding version boundary (8 vs 40 bytes) with RE findings from ZZ DLL; G2 analysis inconclusive ([#167](https://github.com/Mezeporta/Erupe/issues/167))
 - Fixed bookshelf save data pointer being off by 14810 bytes for G1Z2, F4F5, and S6 game versions — corrected offsets to 103928, 71928, and 23928 respectively ([#164](https://github.com/Mezeporta/Erupe/issues/164))
 - Fixed guild alliance application toggle being hardcoded to always-open — now persisted in DB and togglable by the parent guild leader via `OperateJoint` Allow/Deny actions ([#166](https://github.com/Mezeporta/Erupe/issues/166))
 - Fixed gacha shop not working on G1GG clients due to protocol differences in `handleMsgMhfEnumerateShop` when `ShopType` is 1 or 2 — thanks @Sin365 (#150)


@@ -29,9 +29,9 @@ These TODOs represent features that are visibly broken for players.
 | ~~`model_character.go:88,101,113`~~ | ~~`TODO: fix bookshelf data pointer` for G10-ZZ, F4-F5, and S6 versions~~ | ~~Wrong pointer corrupts character save reads for three game versions.~~ **Fixed.** Corrected offsets to 103928 (G1Z2), 71928 (F4F5), 23928 (S6) — validated via inter-version delta analysis and Ghidra decompilation of `snj_db_get_housedata` in the ZZ DLL. | [#164](https://github.com/Mezeporta/Erupe/issues/164) |
 | `handlers_achievement.go:117` | `TODO: Notify on rank increase` — always returns `false` | Achievement rank-up notifications are silently suppressed. Requires understanding what `MhfDisplayedAchievement` (currently an empty handler) sends to track "last displayed" state. | [#165](https://github.com/Mezeporta/Erupe/issues/165) |
 | ~~`handlers_guild_info.go:443`~~ | ~~`TODO: Enable GuildAlliance applications` — hardcoded `true`~~ | ~~Guild alliance applications are always open regardless of setting.~~ **Fixed.** Added `recruiting` column to `guild_alliances`, wired `OperateJoint` actions `0x06`/`0x07`, reads from DB. | [#166](https://github.com/Mezeporta/Erupe/issues/166) |
-| `handlers_session.go:410` | `TODO(Andoryuuta): log key index off-by-one` | Known off-by-one in log key indexing is unresolved | [#167](https://github.com/Mezeporta/Erupe/issues/167) |
-| `handlers_session.go:551` | `TODO: This case might be <=G2` | Uncertain version detection in switch case | [#167](https://github.com/Mezeporta/Erupe/issues/167) |
-| `handlers_session.go:714` | `TODO: Retail returned the number of clients in quests` | Player count reported to clients does not match retail behavior | [#167](https://github.com/Mezeporta/Erupe/issues/167) |
+| ~~`handlers_session.go:410`~~ | ~~`TODO(Andoryuuta): log key index off-by-one`~~ | ~~Known off-by-one in log key indexing is unresolved~~ **Documented.** RE'd from ZZ DLL: `putRecord_log`/`putTerminal_log` don't embed the key (size 0), so the off-by-one only matters in pre-ZZ clients and is benign server-side. | [#167](https://github.com/Mezeporta/Erupe/issues/167) |
+| ~~`handlers_session.go:551`~~ | ~~`TODO: This case might be <=G2`~~ | ~~Uncertain version detection in switch case~~ **Documented.** RE'd ZZ per-entry parser (FUN_115868a0) confirms 40-byte padding. G2 DLL analysis inconclusive (stripped, no shared struct sizes). Kept <=G1 boundary with RE documentation. | [#167](https://github.com/Mezeporta/Erupe/issues/167) |
+| ~~`handlers_session.go:714`~~ | ~~`TODO: Retail returned the number of clients in quests`~~ | ~~Player count reported to clients does not match retail behavior~~ **Fixed.** Added `QuestReserved` field to `StageSnapshot` that counts only clients in "Qs" stages, pre-collected under server lock to respect lock ordering. | [#167](https://github.com/Mezeporta/Erupe/issues/167) |
 | `msg_mhf_add_ud_point.go:28` | `TODO: Parse is a stub` — field meanings unknown | UD point packet fields unnamed, `Build` not implemented | [#168](https://github.com/Mezeporta/Erupe/issues/168) |
 ### 2. Test gaps on critical paths


@@ -51,6 +51,7 @@ type StageSnapshot struct {
 StageID       string
 ClientCount   int
 Reserved      int
+QuestReserved int // Players who left to enter quest stages ("Qs" prefix)
 MaxPlayers    uint16
 RawBinData0   []byte
 RawBinData1   []byte


@@ -107,6 +107,19 @@ func (r *LocalChannelRegistry) SearchStages(stagePrefix string, max int) []Stage
 if len(results) >= max {
 	break
 }
+// Pre-collect which charIDs are in quest stages under server lock,
+// so we can count quest-reserved players without lock ordering issues
+// (Server.Mutex must be acquired before Stage.RWMutex).
+c.Lock()
+inQuest := make(map[uint32]bool)
+for _, sess := range c.sessions {
+	if sess.stage != nil && len(sess.stage.id) > 4 && sess.stage.id[3:5] == "Qs" {
+		inQuest[sess.charID] = true
+	}
+}
+c.Unlock()
 cIP := net.ParseIP(c.IP).To4()
 cPort := c.Port
 c.stages.Range(func(_ string, stage *Stage) bool {
@@ -127,12 +140,20 @@ func (r *LocalChannelRegistry) SearchStages(stagePrefix string, max int) []Stage
 bin3Copy := make([]byte, len(bin3))
 copy(bin3Copy, bin3)
+questReserved := 0
+for charID := range stage.reservedClientSlots {
+	if inQuest[charID] {
+		questReserved++
+	}
+}
 results = append(results, StageSnapshot{
 	ServerIP:    cIP,
 	ServerPort:  cPort,
 	StageID:     stage.id,
 	ClientCount: len(stage.clients) + len(stage.reservedClientSlots),
 	Reserved:    len(stage.reservedClientSlots),
+	QuestReserved: questReserved,
 	MaxPlayers:  stage.maxPlayers,
 	RawBinData0: bin0Copy,
 	RawBinData1: bin1Copy,


@@ -407,8 +407,13 @@ func handleMsgSysIssueLogkey(s *Session, p mhfpacket.MHFPacket) {
 return
 }
-// TODO(Andoryuuta): In the official client, the log key index is off by one,
-// cutting off the last byte in _most uses_. Find and document these accordingly.
+// Client log key off-by-one (RE'd from mhfo-hd.dll ZZ):
+// putIssue_logkey (0x1D) requests and stores all 16 bytes correctly.
+// putRecord_log (0x1E) and putTerminal_log (0x13) do NOT embed the log key
+// in their packets — they pass size 0 to the packet builder for the key field.
+// The original off-by-one note (Andoryuuta) may apply to pre-ZZ clients where
+// these functions did use the key. In ZZ the key is stored but never sent back,
+// so the server value is effectively unused beyond issuance.
 s.Lock()
 s.logKey = logKey
 s.Unlock()
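The behavior the comment block above documents can be illustrated with a small hypothetical sketch (the function names below are illustrative, not from the client binary):

```go
package main

// embedKeyPreZZ models the suspected pre-ZZ client bug: a 16-byte log key
// is stored, but one byte too few is embedded in follow-up packets, cutting
// off the final byte.
func embedKeyPreZZ(key []byte) []byte {
	if len(key) == 0 {
		return nil
	}
	out := make([]byte, len(key)-1) // off-by-one: last byte dropped
	copy(out, key[:len(key)-1])
	return out
}

// embedKeyZZ models what the ZZ decompilation shows: putRecord_log and
// putTerminal_log pass size 0 for the key field, embedding nothing at all,
// which is why the server-side value is unused beyond issuance.
func embedKeyZZ(key []byte) []byte {
	return nil
}
```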
@@ -548,7 +553,11 @@ func handleMsgMhfTransitMessage(s *Session, p mhfpacket.MHFPacket) {
 resp.WriteUint8(uint8(len(sjisName) + 1))
 resp.WriteUint16(uint16(len(snap.UserBinary3)))
-// TODO: This case might be <=G2
+// User search response padding block (RE'd from mhfo-hd.dll ZZ):
+// ZZ per-entry parser (FUN_115868a0) reads 0x28 (40) bytes at offset +8
+// via memcpy into the result struct. G1 and earlier use 8 bytes.
+// G2 DLL analysis was inconclusive (stripped binary, no shared struct
+// sizes with ZZ) — the boundary may be <=G2 rather than <=G1.
 if s.server.erupeConfig.RealClientMode <= cfg.G1 {
 	resp.WriteBytes(make([]byte, 8))
 } else {
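The ZZ-side read that the comment above cites can be sketched as follows; the function name is illustrative and the layout is only what the `memcpy(dst, src+8, 0x28)` finding supports:

```go
package main

import "errors"

// zzBlockSize is 0x28 (40) bytes, per the memcpy(dst, src+8, 0x28) seen in
// the ZZ per-entry parser. Pre-G2 clients read an 8-byte block instead.
const zzBlockSize = 0x28

// parseEntryBlockZZ mirrors the client-side copy: 0x28 bytes taken from
// entry offset +8.
func parseEntryBlockZZ(entry []byte) ([]byte, error) {
	if len(entry) < 8+zzBlockSize {
		return nil, errors.New("entry too short for ZZ layout")
	}
	block := make([]byte, zzBlockSize)
	copy(block, entry[8:8+zzBlockSize])
	return block, nil
}
```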
@@ -711,8 +720,10 @@ func handleMsgMhfTransitMessage(s *Session, p mhfpacket.MHFPacket) {
 resp.WriteUint16(0) // Unk, [0 1 2]
 resp.WriteUint16(uint16(sr.ClientCount))
 resp.WriteUint16(sr.MaxPlayers)
-// TODO: Retail returned the number of clients in quests, not workshop/my series
-resp.WriteUint16(uint16(sr.Reserved))
+// Retail returned only clients in quest stages ("Qs" prefix),
+// not workshop/my series. RE'd from FUN_11586690 in mhfo-hd.dll ZZ:
+// field at entry offset 0x08-0x09 → struct offset 0x1C (param_1[0xe]).
+resp.WriteUint16(uint16(sr.QuestReserved))
 resp.WriteUint8(0) // Static?
 resp.WriteUint8(uint8(sr.MaxPlayers))
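The offset mapping in the comment above (entry offset 0x08-0x09 → struct offset 0x1C, i.e. 16-bit slot 0xe) can be sketched like this; big-endian wire order is an assumption based on the rest of the protocol, and the function name is illustrative:

```go
package main

import "encoding/binary"

// storeQuestCount models the ZZ client-side read: the 16-bit quest-player
// count at entry offset 0x08 lands in 16-bit slot 0xe of the result struct,
// which is byte offset 0x1C (0xe * 2) — matching param_1[0xe] in the
// decompilation. Big-endian order here is an assumption.
func storeQuestCount(entry []byte, result []uint16) {
	result[0xe] = binary.BigEndian.Uint16(entry[0x08:0x0a])
}
```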