mirror of
https://github.com/Mezeporta/Erupe.git
synced 2026-03-22 07:32:32 +01:00
refactor(channelserver): migrate inline queries to helpers and define named constants
Migrate 6 character data handlers to use the existing loadCharacterData and saveCharacterData helpers, eliminating duplicate inline SQL:

- LoadFavoriteQuest, SaveFavoriteQuest, LoadDecoMyset, LoadMezfesData, LoadHunterNavi, GetEquipSkinHist

Define named constants replacing magic numbers across handlers:

- Achievement trophy tiers, broadcast/message types, diva phase durations, RP accrual rates, kill log layout, semaphore bases, quest stage/loading screen IDs

Update the anti-patterns doc with accurate line counts, evidence-based softlock analysis, and revised refactoring priorities.
@@ -23,37 +23,90 @@

 ## 1. God Files — Massive Handler Files

-The channel server has enormous files with thousands of lines, each mixing DB queries, business logic, binary serialization, and response writing with no layering.
+The channel server has large handler files, each mixing DB queries, business logic, binary serialization, and response writing with no layering. Actual line counts (non-test files):

-| File | Approx. Lines | Purpose |
-|------|---------------|---------|
-| `server/channelserver/handlers_guild.go` | ~2000+ | Guild operations |
-| `server/channelserver/handlers_mail.go` | ~1200+ | Mail system |
-| `server/channelserver/handlers_data.go` | ~800+ | Data save/load |
-| `server/channelserver/handlers_cast_binary.go` | ~500+ | Binary relay |
-| `server/channelserver/sys_session.go` | ~500+ | Session lifecycle |
+| File | Lines | Purpose |
+|------|-------|---------|
+| `server/channelserver/handlers_session.go` | 794 | Session setup/teardown |
+| `server/channelserver/handlers_data_paper_tables.go` | 765 | Paper table data |
+| `server/channelserver/handlers_quest.go` | 722 | Quest lifecycle |
+| `server/channelserver/handlers_house.go` | 638 | Housing system |
+| `server/channelserver/handlers_festa.go` | 637 | Festival events |
+| `server/channelserver/handlers_data_paper.go` | 621 | Paper/data system |
+| `server/channelserver/handlers_tower.go` | 529 | Tower gameplay |
+| `server/channelserver/handlers_mercenary.go` | 495 | Mercenary system |
+| `server/channelserver/handlers_stage.go` | 492 | Stage/lobby management |
+| `server/channelserver/handlers_guild_info.go` | 473 | Guild info queries |

-**Impact:** These files are difficult to navigate, review, and maintain. A change to guild mail logic requires working through a 2000-line file that also handles guild creation, management, and recruitment.
+These sizes (~500-800 lines) are not extreme by Go standards, but the files mix all architectural concerns. The bigger problem is the lack of layering within each file (see [#3](#3-no-architectural-layering--handlers-do-everything)), not the file sizes themselves.
+
+**Impact:** Each handler function is a monolith mixing data access, business logic, and protocol serialization. Testing or reusing any single concern is impossible.

 ---

-## 2. Silently Swallowed Errors
+## 2. Missing ACK Responses on Error Paths (Client Softlocks)

-This is the most pervasive anti-pattern. The dominant error handling pattern across nearly every `handlers_*.go` file is:
+Some handler error paths log the error and return without sending any ACK response to the client. The MHF client uses `MsgSysAck` with an `ErrorCode` field (0 = success, 1 = failure) to complete request/response cycles. When no ACK is sent at all, the client softlocks waiting for a response that never arrives.
+
+### The three error handling patterns in the codebase
+
+**Pattern A — Silent return (the bug):** Error logged, no ACK sent, client hangs.

 ```go
 rows, err := s.Server.DB.Query(...)
 if err != nil {
 	s.logger.Error("Failed to get ...", zap.Error(err))
-	return // client gets no response, silently fails
+	return // BUG: client gets no response, softlocks
 }
 ```

-Errors are logged server-side but the client receives no error response. The client is left hanging or receives incomplete data with no indication of failure.
+**Pattern B — Log and continue (acceptable):** Error logged, handler continues and sends a success ACK with default/empty data. The client proceeds with fallback behavior.

-**Impact:** Client-side debugging is extremely difficult. Players experience mysterious failures with no feedback. Error recovery is impossible since the client doesn't know something went wrong.
+```go
+if err != nil {
+	s.logger.Error("Failed to load mezfes data", zap.Error(err))
+}
+// Falls through to doAckBufSucceed with empty data
+```

-**Recommendation:** Define error response packets or at least send a generic failure response to the client before returning.
+**Pattern C — Fail ACK (correct):** Error logged, explicit fail ACK sent. The client shows an appropriate error dialog and stays connected.
+
+```go
+if err != nil {
+	s.logger.Error("Failed to read rengoku_data.bin", zap.Error(err))
+	doAckBufFail(s, pkt.AckHandle, nil)
+	return
+}
+```
+
+### Evidence that fail ACKs are safe
+
+The codebase already sends ~70 `doAckSimpleFail`/`doAckBufFail` calls in production handler code across 15 files. The client handles them gracefully in all observed cases:
+
+| File | Fail ACKs | Client behavior |
+|------|-----------|-----------------|
+| `handlers_guild_scout.go` | 17 | Guild recruitment error dialogs |
+| `handlers_guild_ops.go` | 10 | Permission denied, guild not found dialogs |
+| `handlers_stage.go` | 8 | "Room is full", "wrong password", "stage locked" |
+| `handlers_house.go` | 6 | Wrong password, invalid box index |
+| `handlers_guild.go` | 9 | Guild icon update errors, unimplemented features |
+| `handlers_guild_alliance.go` | 4 | Alliance permission errors |
+| `handlers_data.go` | 4 | Decompression failures, oversized payloads |
+| `handlers_festa.go` | 4 | Festival entry errors |
+| `handlers_quest.go` | 3 | Missing quest/scenario files |
+
+A comment in `handlers_quest.go:188` explicitly documents the mechanism:
+
+> sends doAckBufFail, which triggers the client's error dialog (snj_questd_matching_fail → SetDialogData) instead of a softlock
+
+The original `mhfo-hd.dll` client reads the `ErrorCode` byte from `MsgSysAck` and dispatches to per-message error UI. A fail ACK causes the client to show an error dialog and remain functional. A missing ACK causes a softlock.
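For readers unfamiliar with the framing, the ErrorCode mechanism can be sketched roughly as below. This is illustrative only: the real `MsgSysAck` in Erupe carries additional fields and is serialized with the project's `byteframe` package, so the exact field order here is an assumption.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// buildAck sketches the ACK framing described above: a handle identifying the
// pending request, an ErrorCode byte (0 = success, 1 = failure), then the
// payload. Illustrative only — the real MsgSysAck has more fields.
func buildAck(ackHandle uint32, fail bool, payload []byte) []byte {
	buf := make([]byte, 0, 5+len(payload))
	buf = binary.BigEndian.AppendUint32(buf, ackHandle)
	code := byte(0)
	if fail {
		code = 1 // client dispatches to its per-message error dialog
	}
	buf = append(buf, code)
	return append(buf, payload...)
}

func main() {
	okAck := buildAck(0xCAFE, false, []byte{0xDE, 0xAD})
	failAck := buildAck(0xCAFE, true, nil)
	fmt.Println(okAck[4], failAck[4]) // the ErrorCode bytes: 0 1
}
```

The key point is that *some* ACK always goes out: a fail ACK is one byte different from a success ACK, while no ACK at all leaves the client waiting forever.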
+
+### Scope
+
+A preliminary grep for `logger.Error` followed by bare `return` (no doAck call) found instances across ~25 handler files. The worst offenders are `handlers_festa.go`, `handlers_gacha.go`, `handlers_cafe.go`, and `handlers_house.go`. However, many of these are Pattern B (log-and-continue), not Pattern A. Each instance needs individual review to determine whether an ACK is already sent further down the function.
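That preliminary grep can be reproduced roughly as follows. The flags assume GNU grep with PCRE support, the pattern is deliberately crude (it also flags Pattern B sites whose ACK comes later in the function), and it is demonstrated here against a synthetic file rather than the actual repository:

```shell
# Hypothetical version of the audit grep; a synthetic handler file stands in
# for the real repository.
mkdir -p /tmp/ack_audit
cat > /tmp/ack_audit/handlers_demo.go <<'EOF'
func handleDemo(s *Session) {
	rows, err := s.server.db.Query("SELECT 1")
	if err != nil {
		s.logger.Error("Failed to get rows", zap.Error(err))
		return
	}
	_ = rows
}
EOF
# Pattern A candidates: logger.Error immediately followed by a bare return.
grep -rlPzo 'logger\.Error\([^\n]*\n\s*return\n' /tmp/ack_audit
```

Every hit still needs manual review, since the grep cannot tell whether an ACK was already sent earlier in the handler.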
+
+**Impact:** Players experience softlocks on error paths that could instead show an error dialog and let them continue playing.
+
+**Recommendation:** Audit each silent-return error path. For handlers where the packet has an `AckHandle` and no ACK is sent on the error path, add `doAckSimpleFail`/`doAckBufFail` matching the ACK type used on the success path. This matches the existing pattern used in ~70 other error paths across the codebase.

 ---
@@ -300,9 +353,9 @@ Database operations use raw `database/sql` with PostgreSQL-specific syntax throu

 | Severity | Anti-patterns |
 |----------|--------------|
-| **High** | No architectural layering (#3), silently swallowed errors (#2), god files (#1), tight DB coupling (#13) |
-| **Medium** | Magic numbers (#4), inconsistent binary I/O (#5), Session god object (#6), copy-paste handlers (#8) |
-| **Low** | `init()` registration (#10), inconsistent logging (#12), mutex granularity (#7), panic-based flow (#11) |
+| **High** | Missing ACK responses / softlocks (#2), no architectural layering (#3), tight DB coupling (#13) |
+| **Medium** | Magic numbers (#4), inconsistent binary I/O (#5), Session god object (#6), copy-paste handlers (#8), raw SQL duplication (#9) |
+| **Low** | God files (#1), `init()` registration (#10), inconsistent logging (#12), mutex granularity (#7), panic-based flow (#11) |

 ### Root Cause
@@ -310,8 +363,9 @@ Most of these anti-patterns stem from a single root cause: **the codebase grew o

 ### Recommended Refactoring Priority

-1. **Introduce error responses to clients** — highest user-facing impact, can be done incrementally
-2. **Extract a repository layer** — decouple SQL from handlers, enable testing
-3. **Define protocol constants** — replace magic numbers, improve documentation
-4. **Standardize binary I/O** — pick one approach, migrate the rest
-5. **Split god files** — break handlers into sub-packages by domain (guild/, mail/, quest/)
+1. **Add fail ACKs to silent error paths** — prevents player softlocks, ~70 existing doAckFail calls prove safety, low risk, can be done handler-by-handler
+2. **Extract a character repository layer** — 152 queries across 26 files touch the `characters` table, highest SQL duplication
+3. **Extract load/save helpers** — 38 handlers repeat the same ~10-15 line template, mechanical extraction
+4. **Extract a guild repository layer** — 32 queries across 8-15 files, second-highest SQL duplication
+5. **Define protocol constants** — 1,052 hex literals with 174 unique values, improves documentation
+6. **Standardize binary I/O** — pick `byteframe` (already dominant), migrate remaining `binary.Write` and raw slice code
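As a sketch of what the character repository in priority 2 could look like: a narrow interface over the `characters` table that handlers depend on, with an in-memory double for tests. The method set here is hypothetical; only the column names come from this commit.

```go
package main

import "fmt"

// CharacterRepo sketches the proposed repository boundary: handlers depend on
// this interface instead of inline SQL. Method names are illustrative, not
// Erupe's actual API; mezfes/decomyset/hunternavi/skin_hist are real columns.
type CharacterRepo interface {
	LoadBlob(charID uint32, column string) ([]byte, error)
	SaveBlob(charID uint32, column string, data []byte) error
}

// memRepo is an in-memory stand-in, handy for handler unit tests.
type memRepo struct {
	blobs map[uint32]map[string][]byte
}

func (m *memRepo) LoadBlob(charID uint32, column string) ([]byte, error) {
	return m.blobs[charID][column], nil
}

func (m *memRepo) SaveBlob(charID uint32, column string, data []byte) error {
	if m.blobs[charID] == nil {
		m.blobs[charID] = map[string][]byte{}
	}
	m.blobs[charID][column] = data
	return nil
}

func main() {
	var repo CharacterRepo = &memRepo{blobs: map[uint32]map[string][]byte{}}
	_ = repo.SaveBlob(1, "mezfes", []byte{0x01})
	b, _ := repo.LoadBlob(1, "mezfes")
	fmt.Println(len(b)) // 1
}
```

A production implementation would wrap `*sql.DB` behind the same interface, so handlers can be tested against `memRepo` without a database.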
62
server/channelserver/constants_quest.go
Normal file
@@ -0,0 +1,62 @@
+package channelserver
+
+// Raviente quest type codes
+const (
+	QuestTypeSpecialTool      = uint8(9)
+	QuestTypeRegularRaviente  = uint8(16)
+	QuestTypeViolentRaviente  = uint8(22)
+	QuestTypeBerserkRaviente  = uint8(40)
+	QuestTypeExtremeRaviente  = uint8(50)
+	QuestTypeSmallBerserkRavi = uint8(51)
+)
+
+// Event quest binary frame offsets
+const (
+	questFrameTimeFlagOffset = 25
+	questFrameVariant3Offset = 175
+)
+
+// Quest body lengths per game version
+const (
+	questBodyLenS6   = 160
+	questBodyLenF5   = 168
+	questBodyLenG101 = 192
+	questBodyLenZ1   = 224
+	questBodyLenZZ   = 320
+)
+
+// BackportQuest constants
+const (
+	questRewardTableBase    = uint32(96)
+	questStringPointerOff   = 40
+	questStringTablePadding = 32
+	questStringCount        = 8
+)
+
+// BackportQuest fill lengths per version
+const (
+	questBackportFillS6   = uint32(44)
+	questBackportFillF5   = uint32(52)
+	questBackportFillG101 = uint32(76)
+	questBackportFillZZ   = uint32(108)
+)
+
+// Tune value count limits per game version
+const (
+	tuneLimitG1   = 256
+	tuneLimitG3   = 283
+	tuneLimitGG   = 315
+	tuneLimitG61  = 332
+	tuneLimitG7   = 339
+	tuneLimitG81  = 396
+	tuneLimitG91  = 694
+	tuneLimitG101 = 704
+	tuneLimitZ2   = 750
+	tuneLimitZZ   = 770
+)
+
+// Event quest data size bounds
+const (
+	questDataMaxLen = 896
+	questDataMinLen = 352
+)
@@ -9,6 +9,13 @@ import (
 	"go.uber.org/zap"
 )

+// Achievement trophy tier thresholds (bitfield values)
+const (
+	AchievementTrophyBronze = uint8(0x40)
+	AchievementTrophySilver = uint8(0x60)
+	AchievementTrophyGold   = uint8(0x7F)
+)
+
 var achievementCurves = [][]int32{
 	// 0: HR weapon use, Class use, Tore dailies
 	{5, 15, 30, 50, 100, 150, 200, 300},
@@ -61,10 +68,10 @@ func GetAchData(id uint8, score int32) Achievement {
 			ach.NextValue = 15
 		case 6:
 			ach.NextValue = 15
-			ach.Trophy = 0x40
+			ach.Trophy = AchievementTrophyBronze
 		case 7:
 			ach.NextValue = 20
-			ach.Trophy = 0x60
+			ach.Trophy = AchievementTrophySilver
 		}
 		return ach
 	} else {
@@ -83,7 +90,7 @@ func GetAchData(id uint8, score int32) Achievement {
 			}
 		}
 		ach.Required = uint32(curve[7])
-		ach.Trophy = 0x7F
+		ach.Trophy = AchievementTrophyGold
 		ach.Progress = ach.Required
 		return ach
 	}

@@ -34,7 +34,7 @@ func handleMsgSysCastBinary(s *Session, p mhfpacket.MHFPacket) {
 	pkt := p.(*mhfpacket.MsgSysCastBinary)
 	tmp := byteframe.NewByteFrameFromBytes(pkt.RawDataPayload)

-	if pkt.BroadcastType == 0x03 && pkt.MessageType == 0x03 && len(pkt.RawDataPayload) == 0x10 {
+	if pkt.BroadcastType == BroadcastTypeStage && pkt.MessageType == BinaryMessageTypeData && len(pkt.RawDataPayload) == 0x10 {
 		if tmp.ReadUint16() == 0x0002 && tmp.ReadUint8() == 0x18 {
 			var timer bool
 			if err := s.server.db.QueryRow(`SELECT COALESCE(timer, false) FROM users u WHERE u.id=(SELECT c.user_id FROM characters c WHERE c.id=$1)`, s.charID).Scan(&timer); err != nil {
@@ -50,7 +50,7 @@ func handleMsgSysCastBinary(s *Session, p mhfpacket.MHFPacket) {
 	}

 	if s.server.erupeConfig.DebugOptions.QuestTools {
-		if pkt.BroadcastType == 0x03 && pkt.MessageType == 0x02 && len(pkt.RawDataPayload) > 32 {
+		if pkt.BroadcastType == BroadcastTypeStage && pkt.MessageType == BinaryMessageTypeQuest && len(pkt.RawDataPayload) > 32 {
 			// This is only correct most of the time
 			tmp.ReadBytes(20)
 			tmp.SetLE()
@@ -131,7 +131,7 @@ func handleMsgSysCastBinary(s *Session, p mhfpacket.MHFPacket) {
 			s.stage.BroadcastMHF(resp, s)
 		}
 	case BroadcastTypeServer:
-		if pkt.MessageType == 1 {
+		if pkt.MessageType == BinaryMessageTypeChat {
 			raviSema := s.server.getRaviSemaphore()
 			if raviSema != nil {
 				raviSema.BroadcastMHF(resp, s)

@@ -11,6 +11,14 @@ import (
 	"go.uber.org/zap"
 )

+// Diva Defense event duration constants (all values in seconds)
+const (
+	divaPhaseDuration = 601200  // 6d 23h = first song phase
+	divaInterlude     = 3900    // 65 min = gap between phases
+	divaWeekDuration  = 604800  // 7 days = subsequent phase length
+	divaTotalLifespan = 2977200 // ~34.5 days = full event window
+)
+
 func cleanupDiva(s *Session) {
 	if _, err := s.server.db.Exec("DELETE FROM events WHERE event_type='diva'"); err != nil {
 		s.logger.Error("Failed to delete diva events", zap.Error(err))
@@ -25,29 +33,29 @@ func generateDivaTimestamps(s *Session, start uint32, debug bool) []uint32 {
 		switch start {
 		case 1:
 			timestamps[0] = midnight
-			timestamps[1] = timestamps[0] + 601200
-			timestamps[2] = timestamps[1] + 3900
-			timestamps[3] = timestamps[1] + 604800
-			timestamps[4] = timestamps[3] + 3900
-			timestamps[5] = timestamps[3] + 604800
+			timestamps[1] = timestamps[0] + divaPhaseDuration
+			timestamps[2] = timestamps[1] + divaInterlude
+			timestamps[3] = timestamps[1] + divaWeekDuration
+			timestamps[4] = timestamps[3] + divaInterlude
+			timestamps[5] = timestamps[3] + divaWeekDuration
 		case 2:
-			timestamps[0] = midnight - 605100
-			timestamps[1] = midnight - 3900
+			timestamps[0] = midnight - (divaPhaseDuration + divaInterlude)
+			timestamps[1] = midnight - divaInterlude
 			timestamps[2] = midnight
-			timestamps[3] = timestamps[1] + 604800
-			timestamps[4] = timestamps[3] + 3900
-			timestamps[5] = timestamps[3] + 604800
+			timestamps[3] = timestamps[1] + divaWeekDuration
+			timestamps[4] = timestamps[3] + divaInterlude
+			timestamps[5] = timestamps[3] + divaWeekDuration
 		case 3:
-			timestamps[0] = midnight - 1213800
-			timestamps[1] = midnight - 608700
-			timestamps[2] = midnight - 604800
-			timestamps[3] = midnight - 3900
+			timestamps[0] = midnight - (divaPhaseDuration + divaInterlude + divaWeekDuration + divaInterlude)
+			timestamps[1] = midnight - (divaWeekDuration + divaInterlude)
+			timestamps[2] = midnight - divaWeekDuration
+			timestamps[3] = midnight - divaInterlude
 			timestamps[4] = midnight
-			timestamps[5] = timestamps[3] + 604800
+			timestamps[5] = timestamps[3] + divaWeekDuration
 		}
 		return timestamps
 	}
-	if start == 0 || TimeAdjusted().Unix() > int64(start)+2977200 {
+	if start == 0 || TimeAdjusted().Unix() > int64(start)+divaTotalLifespan {
 		cleanupDiva(s)
 		// Generate a new diva defense, starting midnight tomorrow
 		start = uint32(midnight.Add(24 * time.Hour).Unix())
@@ -56,11 +64,11 @@ func generateDivaTimestamps(s *Session, start uint32, debug bool) []uint32 {
 		}
 	}
 	timestamps[0] = start
-	timestamps[1] = timestamps[0] + 601200
-	timestamps[2] = timestamps[1] + 3900
-	timestamps[3] = timestamps[1] + 604800
-	timestamps[4] = timestamps[3] + 3900
-	timestamps[5] = timestamps[3] + 604800
+	timestamps[1] = timestamps[0] + divaPhaseDuration
+	timestamps[2] = timestamps[1] + divaInterlude
+	timestamps[3] = timestamps[1] + divaWeekDuration
+	timestamps[4] = timestamps[3] + divaInterlude
+	timestamps[5] = timestamps[3] + divaWeekDuration
 	return timestamps
 }
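As a quick consistency check, the composite literals removed above decompose into the new named constants (restated here so the check runs standalone):

```go
package main

import "fmt"

// Constants copied from the diff above; the assertions verify that the old
// magic numbers in generateDivaTimestamps equal the sums they were replaced by.
const (
	divaPhaseDuration = 601200 // 6d 23h
	divaInterlude     = 3900   // 65 min
	divaWeekDuration  = 604800 // 7 days
)

func main() {
	fmt.Println(divaPhaseDuration+divaInterlude == 605100)                     // case 2: timestamps[0]
	fmt.Println(divaWeekDuration+divaInterlude == 608700)                      // case 3: timestamps[1]
	fmt.Println(divaPhaseDuration+2*divaInterlude+divaWeekDuration == 1213800) // case 3: timestamps[0]
}
```

All three print `true`, confirming the refactor is behavior-preserving for these literals.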
@@ -22,21 +22,8 @@ func handleMsgMhfSaveMezfesData(s *Session, p mhfpacket.MHFPacket) {

 func handleMsgMhfLoadMezfesData(s *Session, p mhfpacket.MHFPacket) {
 	pkt := p.(*mhfpacket.MsgMhfLoadMezfesData)
-	var data []byte
-	if err := s.server.db.QueryRow(`SELECT mezfes FROM characters WHERE id=$1`, s.charID).Scan(&data); err != nil && !errors.Is(err, sql.ErrNoRows) {
-		s.logger.Error("Failed to load mezfes data", zap.Error(err))
-	}
-	bf := byteframe.NewByteFrame()
-	if len(data) > 0 {
-		bf.WriteBytes(data)
-	} else {
-		bf.WriteUint32(0)
-		bf.WriteUint8(2)
-		bf.WriteUint32(0)
-		bf.WriteUint32(0)
-		bf.WriteUint32(0)
-	}
-	doAckBufSucceed(s, pkt.AckHandle, bf.Data())
+	loadCharacterData(s, pkt.AckHandle, "mezfes",
+		[]byte{0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00})
 }

 func handleMsgMhfEnumerateRanking(s *Session, p mhfpacket.MHFPacket) {
@@ -271,18 +271,11 @@ func handleMsgMhfUpdateMyhouseInfo(s *Session, p mhfpacket.MHFPacket) {

 func handleMsgMhfLoadDecoMyset(s *Session, p mhfpacket.MHFPacket) {
 	pkt := p.(*mhfpacket.MsgMhfLoadDecoMyset)
-	var data []byte
-	err := s.server.db.QueryRow("SELECT decomyset FROM characters WHERE id = $1", s.charID).Scan(&data)
-	if err != nil {
-		s.logger.Error("Failed to load decomyset", zap.Error(err))
-	}
-	if len(data) == 0 {
-		data = []byte{0x01, 0x00}
-		if s.server.erupeConfig.RealClientMode < _config.G10 {
-			data = []byte{0x00, 0x00}
-		}
-	}
-	doAckBufSucceed(s, pkt.AckHandle, data)
+	defaultData := []byte{0x01, 0x00}
+	if s.server.erupeConfig.RealClientMode < _config.G10 {
+		defaultData = []byte{0x00, 0x00}
+	}
+	loadCharacterData(s, pkt.AckHandle, "decomyset", defaultData)
 }

 func handleMsgMhfSaveDecoMyset(s *Session, p mhfpacket.MHFPacket) {
@@ -47,13 +47,7 @@ func handleMsgMhfLoadHunterNavi(s *Session, p mhfpacket.MHFPacket) {
 	if s.server.erupeConfig.RealClientMode <= _config.G7 {
 		naviLength = 280
 	}
-	var data []byte
-	err := s.server.db.QueryRow("SELECT hunternavi FROM characters WHERE id = $1", s.charID).Scan(&data)
-	if len(data) == 0 {
-		s.logger.Error("Failed to load hunternavi", zap.Error(err))
-		data = make([]byte, naviLength)
-	}
-	doAckBufSucceed(s, pkt.AckHandle, data)
+	loadCharacterData(s, pkt.AckHandle, "hunternavi", make([]byte, naviLength))
 }

 func handleMsgMhfSaveHunterNavi(s *Session, p mhfpacket.MHFPacket) {
@@ -170,13 +170,7 @@ func equipSkinHistSize(mode _config.Mode) int {

 func handleMsgMhfGetEquipSkinHist(s *Session, p mhfpacket.MHFPacket) {
 	pkt := p.(*mhfpacket.MsgMhfGetEquipSkinHist)
 	size := equipSkinHistSize(s.server.erupeConfig.RealClientMode)
-	var data []byte
-	err := s.server.db.QueryRow("SELECT COALESCE(skin_hist::bytea, $2::bytea) FROM characters WHERE id = $1", s.charID, make([]byte, size)).Scan(&data)
-	if err != nil {
-		s.logger.Error("Failed to load skin_hist", zap.Error(err))
-		data = make([]byte, size)
-	}
-	doAckBufSucceed(s, pkt.AckHandle, data)
+	loadCharacterData(s, pkt.AckHandle, "skin_hist", make([]byte, size))
 }

 func handleMsgMhfUpdateEquipSkinHist(s *Session, p mhfpacket.MHFPacket) {
@@ -50,7 +50,7 @@ func equal(a, b []byte) bool {

 // BackportQuest converts a quest binary to an older format.
 func BackportQuest(data []byte, mode _config.Mode) []byte {
-	wp := binary.LittleEndian.Uint32(data[0:4]) + 96
+	wp := binary.LittleEndian.Uint32(data[0:4]) + questRewardTableBase
 	rp := wp + 4
 	for i := uint32(0); i < 6; i++ {
 		if i != 0 {
@@ -60,13 +60,13 @@ func BackportQuest(data []byte, mode _config.Mode) []byte {
 		copy(data[wp:wp+4], data[rp:rp+4])
 	}

-	fillLength := uint32(108)
+	fillLength := questBackportFillZZ
 	if mode <= _config.S6 {
-		fillLength = 44
+		fillLength = questBackportFillS6
 	} else if mode <= _config.F5 {
-		fillLength = 52
+		fillLength = questBackportFillF5
 	} else if mode <= _config.G101 {
-		fillLength = 76
+		fillLength = questBackportFillG101
 	}

 	copy(data[wp:wp+fillLength], data[rp:rp+fillLength])
@@ -195,27 +195,13 @@ func seasonConversion(s *Session, questFile string) string {

 func handleMsgMhfLoadFavoriteQuest(s *Session, p mhfpacket.MHFPacket) {
 	pkt := p.(*mhfpacket.MsgMhfLoadFavoriteQuest)
-	var data []byte
-	err := s.server.db.QueryRow("SELECT savefavoritequest FROM characters WHERE id = $1", s.charID).Scan(&data)
-	if err == nil && len(data) > 0 {
-		doAckBufSucceed(s, pkt.AckHandle, data)
-	} else {
-		doAckBufSucceed(s, pkt.AckHandle, []byte{0x01, 0x00, 0x01, 0x00, 0x01, 0x00, 0x01, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00})
-	}
+	loadCharacterData(s, pkt.AckHandle, "savefavoritequest",
+		[]byte{0x01, 0x00, 0x01, 0x00, 0x01, 0x00, 0x01, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00})
 }

 func handleMsgMhfSaveFavoriteQuest(s *Session, p mhfpacket.MHFPacket) {
 	pkt := p.(*mhfpacket.MsgMhfSaveFavoriteQuest)
-	if len(pkt.Data) > 65536 {
-		s.logger.Warn("FavoriteQuest payload too large", zap.Int("len", len(pkt.Data)))
-		doAckSimpleSucceed(s, pkt.AckHandle, make([]byte, 4))
-		return
-	}
-	dumpSaveData(s, pkt.Data, "favquest")
-	if _, err := s.server.db.Exec("UPDATE characters SET savefavoritequest=$1 WHERE id=$2", pkt.Data, s.charID); err != nil {
-		s.logger.Error("Failed to save favorite quest", zap.Error(err))
-	}
-	doAckSimpleSucceed(s, pkt.AckHandle, []byte{0x00, 0x00, 0x00, 0x00})
+	saveCharacterData(s, pkt.AckHandle, "savefavoritequest", pkt.Data, 65536)
 }
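Judging from the call sites in this diff, the shared helpers take a column name, a default payload, and (for saves) a size cap. A simplified stand-in with the DB and ACK plumbing replaced by a map — the signatures are inferred from usage, not copied from Erupe:

```go
package main

import "fmt"

// Simplified stand-ins for the loadCharacterData/saveCharacterData helpers
// used above. The real helpers query/update the characters table and send the
// ACK; here a map plays the database so the default-fallback and size-cap
// behavior can be shown.
var characters = map[string][]byte{}

func loadCharacterData(column string, defaultData []byte) []byte {
	if data := characters[column]; len(data) > 0 {
		return data // stored blob wins
	}
	return defaultData // empty/missing column falls back to the default payload
}

func saveCharacterData(column string, data []byte, maxLen int) bool {
	if len(data) > maxLen {
		return false // oversized payload rejected, nothing stored
	}
	characters[column] = data
	return true
}

func main() {
	fmt.Println(len(loadCharacterData("mezfes", make([]byte, 17)))) // default: 17
	saveCharacterData("mezfes", []byte{0xAA}, 65536)
	fmt.Println(len(loadCharacterData("mezfes", make([]byte, 17)))) // stored: 1
}
```

Centralizing the fallback logic this way is what lets the six handlers above collapse to one-line bodies.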
func loadQuestFile(s *Session, questId int) []byte {
@@ -237,14 +237,14 @@ func logoutPlayer(s *Session) {
 	timePlayed += sessionTime

 	if mhfcourse.CourseExists(30, s.courses) {
-		rpGained = timePlayed / 900
-		timePlayed = timePlayed % 900
+		rpGained = timePlayed / rpAccrualCafe
+		timePlayed = timePlayed % rpAccrualCafe
 		if _, err := s.server.db.Exec("UPDATE characters SET cafe_time=cafe_time+$1 WHERE id=$2", sessionTime, s.charID); err != nil {
 			s.logger.Error("Failed to update cafe time", zap.Error(err))
 		}
 	} else {
-		rpGained = timePlayed / 1800
-		timePlayed = timePlayed % 1800
+		rpGained = timePlayed / rpAccrualNormal
+		timePlayed = timePlayed % rpAccrualNormal
 	}

 	s.logger.Debug("Session metrics calculated",
@@ -386,13 +386,25 @@ func handleMsgSysIssueLogkey(s *Session, p mhfpacket.MHFPacket) {
 	doAckBufSucceed(s, pkt.AckHandle, resp.Data())
 }

+// Kill log binary layout constants
+const (
+	killLogHeaderSize   = 32  // bytes before monster kill count array
+	killLogMonsterCount = 176 // monster table entries
+)
+
+// RP accrual rate constants (seconds per RP point)
+const (
+	rpAccrualNormal = 1800 // 30 min per RP without cafe
+	rpAccrualCafe   = 900  // 15 min per RP with cafe course
+)
+
 func handleMsgSysRecordLog(s *Session, p mhfpacket.MHFPacket) {
 	pkt := p.(*mhfpacket.MsgSysRecordLog)
 	if s.server.erupeConfig.RealClientMode == _config.ZZ {
 		bf := byteframe.NewByteFrameFromBytes(pkt.Data)
-		_, _ = bf.Seek(32, 0)
+		_, _ = bf.Seek(killLogHeaderSize, 0)
 		var val uint8
-		for i := 0; i < 176; i++ {
+		for i := 0; i < killLogMonsterCount; i++ {
 			val = bf.ReadUint8()
 			if val > 0 && mhfmon.Monsters[i].Large {
 				if _, err := s.server.db.Exec(`INSERT INTO kill_logs (character_id, monster, quantity, timestamp) VALUES ($1, $2, $3, $4)`, s.charID, i, val, TimeAdjusted()); err != nil {
@@ -327,12 +327,18 @@ func (s *Session) getObjectId() uint32 {
 	return uint32(s.objectID)<<16 | uint32(s.objectIndex)
 }

+// Semaphore ID base values
+const (
+	semaphoreBaseDefault = uint32(0x000F0000)
+	semaphoreBaseAlt     = uint32(0x000E0000)
+)
+
 // GetSemaphoreID returns the semaphore ID held by the session, varying by semaphore mode.
 func (s *Session) GetSemaphoreID() uint32 {
 	if s.semaphoreMode {
-		return 0x000E0000 + uint32(s.semaphoreID[1])
+		return semaphoreBaseAlt + uint32(s.semaphoreID[1])
 	} else {
-		return 0x000F0000 + uint32(s.semaphoreID[0])
+		return semaphoreBaseDefault + uint32(s.semaphoreID[0])
 	}
 }