refactor(channelserver): remove Channels fallbacks, use Registry as sole cross-channel API

main.go always sets both Channels and Registry together, making the
Channels fallback paths dead code. This removes:

- Server.Channels field from the Server struct
- 3 if/else fallback blocks in handlers_session.go (replaced with
  Registry.FindChannelForStage, SearchSessions, SearchStages)
- 1 if/else fallback block in handlers_guild_ops.go (replaced with
  Registry.NotifyMailToCharID)
- 3 method fallbacks in sys_channel_server.go (WorldcastMHF,
  FindSessionByCharID, DisconnectUser now delegate directly)

Updates anti-patterns.md #6 to "accepted design" — Session struct is
appropriate for this game server's handler pattern, and cross-channel
coupling is now fully routed through the ChannelRegistry interface.
Author: Houmgaor
Date:   2026-02-22 16:16:44 +01:00
Parent: cd630a7a58
Commit: 53b5bb3b96

11 changed files with 113 additions and 252 deletions
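The cross-channel surface this commit consolidates can be sketched as an interface plus an in-memory implementation. This is a hypothetical sketch only: the method names come from the commit message and call sites below, but the exact signatures, the `SessionSnapshot` fields, and the `toyRegistry` type are assumptions, not the project's actual `ChannelRegistry` / `NewLocalChannelRegistry` definitions.

```go
package main

import (
	"fmt"
	"strings"
)

// SessionSnapshot is a guessed, trimmed-down version of the snapshot type
// the diff passes to SearchSessions predicates.
type SessionSnapshot struct {
	CharID uint32
	Name   string
}

// ChannelRegistry: method names from the commit; signatures are inferred
// from call sites in the diffs and may differ from the real interface.
type ChannelRegistry interface {
	FindChannelForStage(stageSuffix string) string
	SearchSessions(pred func(SessionSnapshot) bool, max int) []SessionSnapshot
}

// toyRegistry is an illustrative in-memory stand-in, loosely analogous to
// what NewLocalChannelRegistry might build over a slice of channels.
type toyRegistry struct {
	stages   map[string]string // stage ID -> owning channel's GlobalID
	sessions []SessionSnapshot
}

// FindChannelForStage returns the GlobalID of the channel hosting a stage
// whose ID ends with the given suffix, or "" if no channel has it.
func (r *toyRegistry) FindChannelForStage(suffix string) string {
	for id, gid := range r.stages {
		if strings.HasSuffix(id, suffix) {
			return gid
		}
	}
	return ""
}

// SearchSessions collects up to max snapshots matching the predicate.
func (r *toyRegistry) SearchSessions(pred func(SessionSnapshot) bool, max int) []SessionSnapshot {
	var out []SessionSnapshot
	for _, s := range r.sessions {
		if len(out) == max {
			break
		}
		if pred(s) {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	var reg ChannelRegistry = &toyRegistry{
		stages:   map[string]string{"stage_user123": "channel-B"},
		sessions: []SessionSnapshot{{CharID: 7, Name: "Hunter"}},
	}
	fmt.Println(reg.FindChannelForStage("user123"))
	fmt.Println(len(reg.SearchSessions(func(s SessionSnapshot) bool { return s.CharID == 7 }, 10)))
}
```

The point of routing every cross-channel operation through one interface is that handlers no longer reach into sibling `*Server` values directly, so locking and distribution concerns live behind the registry.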


@@ -176,24 +176,17 @@ Pattern C (raw `data[i] = byte(...)` serialization) does not exist in production
 ---
-## 6. Session Struct is a God Object
-`sys_session.go` defines a `Session` struct that carries everything a handler could possibly need:
-- Database connection (`*sql.DB`)
-- Logger
-- Server reference (which itself contains more shared state)
-- Character state (ID, name, stats)
-- Stage/lobby state
-- Semaphore state
-- Send channels
-- Various flags and locks
-Every handler receives this god object, coupling all handlers to the entire server's internal state.
-**Impact:** Any handler can modify any part of the session or server state. There's no encapsulation. Testing requires constructing a fully populated Session with all dependencies. It's unclear which fields a given handler actually needs.
-**Recommendation:** Pass narrower interfaces to handlers (e.g., a `DBQuerier` interface instead of the full server, a `ResponseWriter` instead of the raw send channel).
+## 6. ~~Session Struct is a God Object~~ (Accepted Design)
+`sys_session.go` defines a `Session` struct (~30 fields) that every handler receives. After analysis, this is accepted as appropriate design for this codebase:
+- **Field clustering is natural:** The ~30 fields cluster into 7 groups (transport, identity, stage, semaphore, gameplay, mail, debug). Transport fields (`rawConn`, `cryptConn`, `sendPackets`) are only used by `sys_session.go` — already isolated. Stage, semaphore, and mail fields are each used by 1-5 dedicated handlers.
+- **Core identity is pervasive:** `charID` is used by 38 handlers — it's the core identity field. Extracting it adds indirection for zero benefit.
+- **`s.server` coupling is genuine:** Handlers need 2-5 repos + config + broadcast, so narrower interfaces would mirror the full server without meaningful decoupling.
+- **Cross-channel operations use `Registry`:** The `Channels []*Server` field has been removed. All cross-channel operations (worldcast, session lookup, disconnect, stage search, mail notification) now go exclusively through the `ChannelRegistry` interface, removing the last direct inter-server coupling.
+- **Standard game server pattern:** For a game server emulator with the `func(s *Session, p MHFPacket)` handler pattern, Session carrying identity + server reference is standard design.
+**Status:** Accepted design. The `Channels` field was removed and all cross-channel operations are routed through `ChannelRegistry`. No further refactoring planned.
 ---
@@ -300,7 +293,7 @@ The codebase mixes logging approaches:
 | Severity | Anti-patterns |
 |----------|--------------|
 | **High** | ~~Missing ACK responses / softlocks (#2)~~ **Fixed**, no architectural layering (#3), ~~tight DB coupling (#13)~~ **Fixed** (21 interfaces + mocks) |
-| **Medium** | ~~Magic numbers (#4)~~ **Fixed**, ~~inconsistent binary I/O (#5)~~ **Resolved**, Session god object (#6), ~~copy-paste handlers (#8)~~ **Fixed**, ~~raw SQL duplication (#9)~~ **Complete** (21 repos, 0 inline queries remain) |
+| **Medium** | ~~Magic numbers (#4)~~ **Fixed**, ~~inconsistent binary I/O (#5)~~ **Resolved**, ~~Session god object (#6)~~ **Accepted design** (Channels removed, Registry-only), ~~copy-paste handlers (#8)~~ **Fixed**, ~~raw SQL duplication (#9)~~ **Complete** (21 repos, 0 inline queries remain) |
 | **Low** | God files (#1), ~~`init()` registration (#10)~~ **Fixed**, ~~inconsistent logging (#12)~~ **Fixed**, ~~mutex granularity (#7)~~ **Partially fixed** (stage map done, Raviente unchanged), ~~panic-based flow (#11)~~ **Fixed** |
 ### Root Cause
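The `func(s *Session, p MHFPacket)` handler shape the accepted-design entry refers to can be sketched as follows. All types here are simplified, hypothetical stand-ins for illustration; the real erupe-ce `Session` and `Server` carry far more state (repos, config, crypto transport, stage/semaphore fields).

```go
package main

import "fmt"

// MHFPacket stands in for the decoded packet interface.
type MHFPacket interface{ Opcode() uint16 }

// Server is a trimmed stand-in; the real struct also holds repositories,
// config, broadcast helpers, and the Registry.
type Server struct {
	GlobalID string
}

// Session carries the caller's identity plus a back-reference to its
// server — the combination the updated anti-pattern entry accepts.
type Session struct {
	charID uint32
	server *Server
}

type msgPing struct{}

func (msgPing) Opcode() uint16 { return 0x0001 }

// describe builds the log line a handler might emit; factored out so the
// behavior is easy to check in isolation.
func describe(s *Session, p MHFPacket) string {
	return fmt.Sprintf("char %d on %s handled opcode 0x%04x", s.charID, s.server.GlobalID, p.Opcode())
}

// handleMsgPing has the uniform handler shape: Session plus packet.
func handleMsgPing(s *Session, p MHFPacket) {
	fmt.Println(describe(s, p))
}

func main() {
	s := &Session{charID: 42, server: &Server{GlobalID: "channel-A"}}
	handleMsgPing(s, msgPing{})
}
```

Because every handler shares this signature, dispatch reduces to a single opcode-to-handler table, which is why the struct-heavy `Session` is tolerable here.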


@@ -291,7 +291,6 @@ func main() {
 	registry := channelserver.NewLocalChannelRegistry(channels)
 	for _, c := range channels {
-		c.Channels = channels
 		c.Registry = registry
 	}
 }


@@ -572,7 +572,7 @@ func TestHandleMsgSysAuthTerminal(t *testing.T) {
 func TestHandleMsgSysLockGlobalSema_NoMatch(t *testing.T) {
 	server := createMockServer()
 	server.GlobalID = "test-server"
-	server.Channels = []*Server{}
+	server.Registry = NewLocalChannelRegistry([]*Server{})
 	session := createMockSession(1, server)
 	pkt := &mhfpacket.MsgSysLockGlobalSema{
@@ -602,7 +602,7 @@ func TestHandleMsgSysLockGlobalSema_WithChannel(t *testing.T) {
 		GlobalID: "other-server",
 	}
 	channel.stages.Store("stage_user123", NewStage("stage_user123"))
-	server.Channels = []*Server{channel}
+	server.Registry = NewLocalChannelRegistry([]*Server{channel})
 	session := createMockSession(1, server)
@@ -633,7 +633,7 @@ func TestHandleMsgSysLockGlobalSema_SameServer(t *testing.T) {
 		GlobalID: "test-server",
 	}
 	channel.stages.Store("stage_user456", NewStage("stage_user456"))
-	server.Channels = []*Server{channel}
+	server.Registry = NewLocalChannelRegistry([]*Server{channel})
 	session := createMockSession(1, server)


@@ -858,7 +858,7 @@ func TestHandleMsgSysUnlockGlobalSema_Coverage3(t *testing.T) {
 func TestHandleMsgSysLockGlobalSema(t *testing.T) {
 	server := createMockServer()
-	server.Channels = make([]*Server, 0)
+	server.Registry = NewLocalChannelRegistry(make([]*Server, 0))
 	t.Run("no_channels_returns_response", func(t *testing.T) {
 		session := createMockSession(1, server)


@@ -318,28 +318,7 @@ func handleMsgMhfOperateGuildMember(s *Session, p mhfpacket.MHFPacket) {
 			if err := s.server.mailRepo.SendMail(mail.SenderID, mail.RecipientID, mail.Subject, mail.Body, 0, 0, false, true); err != nil {
 				s.logger.Warn("Failed to send guild member operation mail", zap.Error(err))
 			}
-			if s.server.Registry != nil {
-				s.server.Registry.NotifyMailToCharID(pkt.CharID, s, &mail)
-			} else {
-				// Fallback: find the target session under lock, then notify outside the lock.
-				var targetSession *Session
-				for _, channel := range s.server.Channels {
-					channel.Lock()
-					for _, session := range channel.sessions {
-						if session.charID == pkt.CharID {
-							targetSession = session
-							break
-						}
-					}
-					channel.Unlock()
-					if targetSession != nil {
-						break
-					}
-				}
-				if targetSession != nil {
-					SendMailNotification(s, &mail, targetSession)
-				}
-			}
+			s.server.Registry.NotifyMailToCharID(pkt.CharID, s, &mail)
 			doAckSimpleSucceed(s, pkt.AckHandle, make([]byte, 4))
 		}
 	}


@@ -12,7 +12,6 @@ import (
 	"erupe-ce/network/mhfpacket"
 	"fmt"
 	"io"
-	"net"
 	"strings"
 	"time"
@@ -442,19 +441,7 @@ func handleMsgSysEcho(s *Session, p mhfpacket.MHFPacket) {}
 func handleMsgSysLockGlobalSema(s *Session, p mhfpacket.MHFPacket) {
 	pkt := p.(*mhfpacket.MsgSysLockGlobalSema)
-	var sgid string
-	if s.server.Registry != nil {
-		sgid = s.server.Registry.FindChannelForStage(pkt.UserIDString)
-	} else {
-		for _, channel := range s.server.Channels {
-			channel.stages.Range(func(id string, _ *Stage) bool {
-				if strings.HasSuffix(id, pkt.UserIDString) {
-					sgid = channel.GlobalID
-				}
-				return true
-			})
-		}
-	}
+	sgid := s.server.Registry.FindChannelForStage(pkt.UserIDString)
 	bf := byteframe.NewByteFrame()
 	if len(sgid) > 0 && sgid != s.server.GlobalID {
 		bf.WriteUint8(0)
@@ -517,59 +504,33 @@ func handleMsgMhfTransitMessage(s *Session, p mhfpacket.MHFPacket) {
 	resp.WriteUint16(0)
 	switch pkt.SearchType {
 	case 1, 2, 3: // usersearchidx, usersearchname, lobbysearchname
-		// Snapshot matching sessions under lock, then build response outside locks.
-		type sessionResult struct {
-			charID   uint32
-			name     []byte
-			stageID  []byte
-			ip       net.IP
-			port     uint16
-			userBin3 []byte
-		}
-		var results []sessionResult
-		for _, c := range s.server.Channels {
-			if count == maxResults {
-				break
-			}
-			c.Lock()
-			for _, session := range c.sessions {
-				if count == maxResults {
-					break
-				}
-				if pkt.SearchType == 1 && session.charID != cid {
-					continue
-				}
-				if pkt.SearchType == 2 && !strings.Contains(session.Name, term) {
-					continue
-				}
-				if pkt.SearchType == 3 && session.server.IP != ip && session.server.Port != port && session.stage.id != term {
-					continue
-				}
-				count++
-				results = append(results, sessionResult{
-					charID:   session.charID,
-					name:     stringsupport.UTF8ToSJIS(session.Name),
-					stageID:  stringsupport.UTF8ToSJIS(session.stage.id),
-					ip:       net.ParseIP(c.IP).To4(),
-					port:     c.Port,
-					userBin3: c.userBinary.GetCopy(session.charID, 3),
-				})
-			}
-			c.Unlock()
-		}
-		for _, r := range results {
+		predicate := func(snap SessionSnapshot) bool {
+			switch pkt.SearchType {
+			case 1:
+				return snap.CharID == cid
+			case 2:
+				return strings.Contains(snap.Name, term)
+			case 3:
+				return snap.ServerIP.String() == ip && snap.ServerPort == port && snap.StageID == term
+			}
+			return false
+		}
+		snapshots := s.server.Registry.SearchSessions(predicate, int(maxResults))
+		count = uint16(len(snapshots))
+		for _, snap := range snapshots {
 			if !local {
-				resp.WriteUint32(binary.LittleEndian.Uint32(r.ip))
+				resp.WriteUint32(binary.LittleEndian.Uint32(snap.ServerIP))
 			} else {
 				resp.WriteUint32(localhostAddrLE)
 			}
-			resp.WriteUint16(r.port)
-			resp.WriteUint32(r.charID)
-			resp.WriteUint8(uint8(len(r.stageID) + 1))
-			resp.WriteUint8(uint8(len(r.name) + 1))
-			resp.WriteUint16(uint16(len(r.userBin3)))
+			resp.WriteUint16(snap.ServerPort)
+			resp.WriteUint32(snap.CharID)
+			sjisStageID := stringsupport.UTF8ToSJIS(snap.StageID)
+			sjisName := stringsupport.UTF8ToSJIS(snap.Name)
+			resp.WriteUint8(uint8(len(sjisStageID) + 1))
+			resp.WriteUint8(uint8(len(sjisName) + 1))
+			resp.WriteUint16(uint16(len(snap.UserBinary3)))
 			// TODO: This case might be <=G2
 			if s.server.erupeConfig.RealClientMode <= cfg.G1 {
@@ -579,9 +540,9 @@ func handleMsgMhfTransitMessage(s *Session, p mhfpacket.MHFPacket) {
 			}
 			resp.WriteBytes(make([]byte, 8))
-			resp.WriteNullTerminatedBytes(r.stageID)
-			resp.WriteNullTerminatedBytes(r.name)
-			resp.WriteBytes(r.userBin3)
+			resp.WriteNullTerminatedBytes(sjisStageID)
+			resp.WriteNullTerminatedBytes(sjisName)
+			resp.WriteBytes(snap.UserBinary3)
 		}
 	case 4: // lobbysearch
 		type FindPartyParams struct {
@@ -668,119 +629,81 @@ func handleMsgMhfTransitMessage(s *Session, p mhfpacket.MHFPacket) {
 			}
 		}
 		}
-		// Snapshot matching stages under lock, then build response outside locks.
-		type stageResult struct {
-			ip          net.IP
-			port        uint16
-			clientCount int
-			reserved    int
-			maxPlayers  uint16
-			stageID     string
-			stageData   []int16
-			rawBinData0 []byte
-			rawBinData1 []byte
-		}
-		var stageResults []stageResult
-		for _, c := range s.server.Channels {
-			if count == maxResults {
-				break
-			}
-			cIP := net.ParseIP(c.IP).To4()
-			cPort := c.Port
-			c.stages.Range(func(_ string, stage *Stage) bool {
-				if count == maxResults {
-					return false
-				}
-				if strings.HasPrefix(stage.id, findPartyParams.StagePrefix) {
-					stage.RLock()
-					sb3 := byteframe.NewByteFrameFromBytes(stage.rawBinaryData[stageBinaryKey{1, 3}])
-					_, _ = sb3.Seek(4, 0)
-					stageDataParams := 7
-					if s.server.erupeConfig.RealClientMode <= cfg.G10 {
-						stageDataParams = 4
-					} else if s.server.erupeConfig.RealClientMode <= cfg.Z1 {
-						stageDataParams = 6
-					}
-					var stageData []int16
-					for i := 0; i < stageDataParams; i++ {
-						if s.server.erupeConfig.RealClientMode >= cfg.Z1 {
-							stageData = append(stageData, sb3.ReadInt16())
-						} else {
-							stageData = append(stageData, int16(sb3.ReadInt8()))
-						}
-					}
-					if findPartyParams.RankRestriction >= 0 {
-						if stageData[0] > findPartyParams.RankRestriction {
-							stage.RUnlock()
-							return true
-						}
-					}
-					var hasTarget bool
-					if len(findPartyParams.Targets) > 0 {
-						for _, target := range findPartyParams.Targets {
-							if target == stageData[1] {
-								hasTarget = true
-								break
-							}
-						}
-						if !hasTarget {
-							stage.RUnlock()
-							return true
-						}
-					}
-					// Copy binary data under lock
-					bin0 := stage.rawBinaryData[stageBinaryKey{1, 0}]
-					bin0Copy := make([]byte, len(bin0))
-					copy(bin0Copy, bin0)
-					bin1 := stage.rawBinaryData[stageBinaryKey{1, 1}]
-					bin1Copy := make([]byte, len(bin1))
-					copy(bin1Copy, bin1)
-					count++
-					stageResults = append(stageResults, stageResult{
-						ip:          cIP,
-						port:        cPort,
-						clientCount: len(stage.clients) + len(stage.reservedClientSlots),
-						reserved:    len(stage.reservedClientSlots),
-						maxPlayers:  stage.maxPlayers,
-						stageID:     stage.id,
-						stageData:   stageData,
-						rawBinData0: bin0Copy,
-						rawBinData1: bin1Copy,
-					})
-					stage.RUnlock()
-				}
-				return true
-			})
-		}
+		allStages := s.server.Registry.SearchStages(findPartyParams.StagePrefix, int(maxResults))
+
+		// Post-fetch filtering on snapshots (rank restriction, targets)
+		type filteredStage struct {
+			StageSnapshot
+			stageData []int16
+		}
+		var stageResults []filteredStage
+		for _, snap := range allStages {
+			sb3 := byteframe.NewByteFrameFromBytes(snap.RawBinData3)
+			_, _ = sb3.Seek(4, 0)
+			stageDataParams := 7
+			if s.server.erupeConfig.RealClientMode <= cfg.G10 {
+				stageDataParams = 4
+			} else if s.server.erupeConfig.RealClientMode <= cfg.Z1 {
+				stageDataParams = 6
+			}
+			var stageData []int16
+			for i := 0; i < stageDataParams; i++ {
+				if s.server.erupeConfig.RealClientMode >= cfg.Z1 {
+					stageData = append(stageData, sb3.ReadInt16())
+				} else {
+					stageData = append(stageData, int16(sb3.ReadInt8()))
+				}
+			}
+			if findPartyParams.RankRestriction >= 0 {
+				if stageData[0] > findPartyParams.RankRestriction {
+					continue
+				}
+			}
+			if len(findPartyParams.Targets) > 0 {
+				var hasTarget bool
+				for _, target := range findPartyParams.Targets {
+					if target == stageData[1] {
+						hasTarget = true
+						break
+					}
+				}
+				if !hasTarget {
+					continue
+				}
+			}
+			stageResults = append(stageResults, filteredStage{
+				StageSnapshot: snap,
+				stageData:     stageData,
+			})
+		}
+		count = uint16(len(stageResults))
 		for _, sr := range stageResults {
 			if !local {
-				resp.WriteUint32(binary.LittleEndian.Uint32(sr.ip))
+				resp.WriteUint32(binary.LittleEndian.Uint32(sr.ServerIP))
 			} else {
 				resp.WriteUint32(localhostAddrLE)
 			}
-			resp.WriteUint16(sr.port)
+			resp.WriteUint16(sr.ServerPort)
 			resp.WriteUint16(0) // Static?
 			resp.WriteUint16(0) // Unk, [0 1 2]
-			resp.WriteUint16(uint16(sr.clientCount))
-			resp.WriteUint16(sr.maxPlayers)
+			resp.WriteUint16(uint16(sr.ClientCount))
+			resp.WriteUint16(sr.MaxPlayers)
 			// TODO: Retail returned the number of clients in quests, not workshop/my series
-			resp.WriteUint16(uint16(sr.reserved))
+			resp.WriteUint16(uint16(sr.Reserved))
 			resp.WriteUint8(0) // Static?
-			resp.WriteUint8(uint8(sr.maxPlayers))
+			resp.WriteUint8(uint8(sr.MaxPlayers))
 			resp.WriteUint8(1) // Static?
-			resp.WriteUint8(uint8(len(sr.stageID) + 1))
-			resp.WriteUint8(uint8(len(sr.rawBinData0)))
-			resp.WriteUint8(uint8(len(sr.rawBinData1)))
+			resp.WriteUint8(uint8(len(sr.StageID) + 1))
+			resp.WriteUint8(uint8(len(sr.RawBinData0)))
+			resp.WriteUint8(uint8(len(sr.RawBinData1)))
 			for i := range sr.stageData {
 				if s.server.erupeConfig.RealClientMode >= cfg.Z1 {
@@ -792,9 +715,9 @@ func handleMsgMhfTransitMessage(s *Session, p mhfpacket.MHFPacket) {
 			resp.WriteUint8(0) // Unk
 			resp.WriteUint8(0) // Unk
-			resp.WriteNullTerminatedBytes([]byte(sr.stageID))
-			resp.WriteBytes(sr.rawBinData0)
-			resp.WriteBytes(sr.rawBinData1)
+			resp.WriteNullTerminatedBytes([]byte(sr.StageID))
+			resp.WriteBytes(sr.RawBinData0)
+			resp.WriteBytes(sr.RawBinData1)
 		}
 	}
 	_, _ = resp.Seek(0, io.SeekStart)


@@ -281,7 +281,7 @@ func TestHandleMsgSysLockGlobalSema_RemoteMatch(t *testing.T) {
 		clients:             make(map[*Session]uint32),
 		reservedClientSlots: make(map[uint32]bool),
 	})
-	server.Channels = []*Server{server, otherChannel}
+	server.Registry = NewLocalChannelRegistry([]*Server{server, otherChannel})
 	session := createMockSession(1, server)


@@ -43,7 +43,6 @@ type Config struct {
 // own locks internally and may be acquired at any point.
 type Server struct {
 	sync.Mutex
-	Channels []*Server
 	Registry ChannelRegistry
 	ID       uint16
 	GlobalID string
@@ -332,16 +331,7 @@ func (s *Server) BroadcastMHF(pkt mhfpacket.MHFPacket, ignoredSession *Session)
 // WorldcastMHF broadcasts a packet to all sessions across all channel servers.
 func (s *Server) WorldcastMHF(pkt mhfpacket.MHFPacket, ignoredSession *Session, ignoredChannel *Server) {
-	if s.Registry != nil {
-		s.Registry.Worldcast(pkt, ignoredSession, ignoredChannel)
-		return
-	}
-	for _, c := range s.Channels {
-		if c == ignoredChannel {
-			continue
-		}
-		c.BroadcastMHF(pkt, ignoredSession)
-	}
+	s.Registry.Worldcast(pkt, ignoredSession, ignoredChannel)
 }
 // BroadcastChatMessage broadcasts a simple chat message to all the sessions.
@@ -382,20 +372,7 @@ func (s *Server) DiscordScreenShotSend(charName string, title string, descriptio
 // FindSessionByCharID looks up a session by character ID across all channels.
 func (s *Server) FindSessionByCharID(charID uint32) *Session {
-	if s.Registry != nil {
-		return s.Registry.FindSessionByCharID(charID)
-	}
-	for _, c := range s.Channels {
-		c.Lock()
-		for _, session := range c.sessions {
-			if session.charID == charID {
-				c.Unlock()
-				return session
-			}
-		}
-		c.Unlock()
-	}
-	return nil
+	return s.Registry.FindSessionByCharID(charID)
 }
 // DisconnectUser disconnects all sessions belonging to the given user ID.
@@ -404,22 +381,7 @@ func (s *Server) DisconnectUser(uid uint32) {
 	if err != nil {
 		s.logger.Error("Failed to query characters for disconnect", zap.Error(err))
 	}
-	if s.Registry != nil {
-		s.Registry.DisconnectUser(cids)
-		return
-	}
-	for _, c := range s.Channels {
-		c.Lock()
-		for _, session := range c.sessions {
-			for _, cid := range cids {
-				if session.charID == cid {
-					_ = session.rawConn.Close()
-					break
-				}
-			}
-		}
-		c.Unlock()
-	}
+	s.Registry.DisconnectUser(cids)
 }
 // FindObjectByChar finds a stage object owned by the given character ID.


@@ -52,7 +52,7 @@ func (m *mockConn) WasClosed() bool {
 // createTestServer creates a test server instance
 func createTestServer() *Server {
 	logger, _ := zap.NewDevelopment()
-	return &Server{
+	s := &Server{
 		ID:       1,
 		logger:   logger,
 		sessions: make(map[net.Conn]*Session),
@@ -71,6 +71,8 @@ func createTestServer() *Server {
 			support: make([]uint32, 30),
 		},
 	}
+	s.Registry = NewLocalChannelRegistry([]*Server{s})
+	return s
 }
 // createTestSessionForServer creates a session for a specific server
@@ -296,7 +298,7 @@ func TestBroadcastMHFAllSessions(t *testing.T) {
 // TestFindSessionByCharID tests finding sessions by character ID
 func TestFindSessionByCharID(t *testing.T) {
 	server := createTestServer()
-	server.Channels = []*Server{server} // Add itself as a channel
+	server.Registry = NewLocalChannelRegistry([]*Server{server})
 	// Create sessions with different char IDs
 	charIDs := []uint32{100, 200, 300}


@@ -55,17 +55,19 @@ func createTestSession(mock network.Conn) *Session {
 	// Create a production logger for testing (will output to stderr)
 	logger, _ := zap.NewProduction()
+	server := &Server{
+		erupeConfig: &cfg.Config{
+			DebugOptions: cfg.DebugOptions{
+				LogOutboundMessages: false,
+			},
+		},
+	}
+	server.Registry = NewLocalChannelRegistry([]*Server{server})
 	s := &Session{
 		logger:      logger,
 		sendPackets: make(chan packet, 20),
 		cryptConn:   mock,
-		server: &Server{
-			erupeConfig: &cfg.Config{
-				DebugOptions: cfg.DebugOptions{
-					LogOutboundMessages: false,
-				},
-			},
-		},
+		server:      server,
 	}
 	return s
 }


@@ -50,6 +50,7 @@ func createMockServer() *Server {
 		},
 	}
 	s.i18n = getLangStrings(s)
+	s.Registry = NewLocalChannelRegistry([]*Server{s})
 	return s
 }