Compare commits

...

31 Commits

Author SHA1 Message Date
douxu 1b1f43db7f feat: implement topology analysis async task with BFS connectivity check
- add TopologyAnalysisHandler.Execute() with 5-phase BFS reachability
    check between start/end component UUIDs; support CheckInService flag
    to skip out-of-service nodes during traversal
  - carry task params through RabbitMQ message (TaskQueueMessage.Params)
    instead of re-querying DB in handler; update TaskHandler.Execute
    interface and all handler signatures accordingly
  - fix BuildMultiBranchTree UUIDFrom condition bug; return nodeMap for
    O(1) lookup; add QueryTopologicByStartUUID for directed traversal
  - add QueryBayByUUID/QueryBaysByUUIDs and
    QueryComponentsInServiceByUUIDs (two-column select) to database layer
  - add diagram.FindPath via LCA algorithm for tree path reconstruction
  - move initTracerProvider to middleware.InitTracerProvider; add
    OtelConfig struct to ModelRTConfig for endpoint configuration
  - update topology analysis params to start/end_component_uuid +
    check_in_service; remove dead topology init code
2026-04-24 17:14:46 +08:00
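The BFS reachability check with a CheckInService flag described in this commit can be sketched roughly as follows; the function name, the adjacency-map shape, and the `inService` set are illustrative assumptions, not the repository's actual API:

```go
package main

import "fmt"

// bfsReachable reports whether end is reachable from start in the adjacency
// map. When checkInService is true, nodes absent from inService are skipped
// during traversal, mirroring the CheckInService flag described above.
func bfsReachable(adj map[string][]string, inService map[string]bool, start, end string, checkInService bool) bool {
	visited := map[string]bool{start: true}
	queue := []string{start}
	for len(queue) > 0 {
		node := queue[0]
		queue = queue[1:]
		if node == end {
			return true
		}
		for _, next := range adj[node] {
			if visited[next] {
				continue
			}
			if checkInService && !inService[next] {
				continue // out-of-service node: do not traverse through it
			}
			visited[next] = true
			queue = append(queue, next)
		}
	}
	return false
}

func main() {
	adj := map[string][]string{"a": {"b"}, "b": {"c"}}
	inService := map[string]bool{"a": true, "c": true} // "b" is out of service
	fmt.Println(bfsReachable(adj, inService, "a", "c", false)) // true
	fmt.Println(bfsReachable(adj, inService, "a", "c", true))  // false: path blocked at "b"
}
```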
douxu 03bd058558 feat: implement end-to-end distributed tracing for HTTP and async tasks
- introduce typed traceCtxKey to prevent context key collisions (staticcheck fix)
  - inject B3 trace values into c.Request.Context() in StartTrace middleware
    so handlers using c.Request.Context() carry trace info
  - create startup trace context in main.go, replacing context.TODO()
  - propagate HTTP traceID/spanID through TaskQueueMessage into RabbitMQ
    worker, linking HTTP request → publish → execution on the same traceID
  - fix GORM logger null traceID by binding ctx to AutoMigrate and queries
    via db.WithContext(ctx)
  - thread ctx through handler factory to fix null traceID in startup logs
  - replace per-request RabbitMQ producer with channel-based
    PushTaskToRabbitMQ goroutine; restrict Swagger to non-production
2026-04-23 16:48:32 +08:00
douxu 809e1cd87d Refactor: extract task constants to dedicated constants package
- Add constants/task.go with centralized task-related constants
    - Task priority levels (default, high, low)
    - Task queue configuration (exchange, queue, routing key)
    - Task message settings (max priority, TTL)
    - Task retry settings (max retries, delays)
    - Test task settings (sleep duration, max limit)

  - Update task-related files to use constants from constants package:
    - handler/async_task_create_handler.go
    - task/queue_message.go
    - task/queue_producer.go
    - task/retry_manager.go
    - task/test_task.go
    - task/types.go (add TypeTest)
    - task/worker.go
2026-04-22 17:20:26 +08:00
douxu 4a3f7a65bc Refactor async task handlers into specialized handlers
Split monolithic async_task_handler.go into separate handlers:
- async_task_cancel_handler.go: Handles task cancellation
- async_task_create_handler.go: Handles task creation
- async_task_progress_update_handler.go: Handles progress updates
- async_task_result_detail_handler.go: Handles result details
- async_task_result_query_handler.go: Handles result queries
- async_task_status_update_handler.go: Handles status updates

This improves code organization and maintainability by separating concerns.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-17 14:09:02 +08:00
douxu 4d5fcbc376 Refactor async task system with unified task interfaces and add test task type
- Create task/types_v2.go with unified task type definitions and interfaces
    * Add UnifiedTaskType and UnifiedTaskStatus constants
    * Define TaskParams interface for parameter validation and serialization
    * Define UnifiedTask interface as base for all task implementations
    * Add BaseTask for common task functionality
2026-04-14 17:00:30 +08:00
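A minimal sketch of a parameter type satisfying such a params interface; the interface shape and the `SleepParams` type here are illustrative guesses, not the contents of task/types_v2.go:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// TaskParams mirrors the validate-and-serialize contract described above.
type TaskParams interface {
	Validate() error
	Serialize() ([]byte, error)
}

// SleepParams is a hypothetical parameter set for a test task.
type SleepParams struct {
	Seconds int `json:"seconds"`
}

// Validate rejects non-positive sleep durations.
func (p SleepParams) Validate() error {
	if p.Seconds <= 0 {
		return errors.New("seconds must be positive")
	}
	return nil
}

// Serialize renders the params as JSON for the queue message.
func (p SleepParams) Serialize() ([]byte, error) { return json.Marshal(p) }

func main() {
	var params TaskParams = SleepParams{Seconds: 30}
	if err := params.Validate(); err != nil {
		panic(err)
	}
	b, _ := params.Serialize()
	fmt.Println(string(b)) // {"seconds":30}
}
```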
douxu f8c0951a13 Extend async task system with database integration and retry management
- Add AsyncTaskConfig to config structure
  - Create database operations for task state management (async_task_extended.go)
  - Add configuration middleware for Gin context
  - Extract task worker initialization to separate file (initializer.go)
  - Implement retry strategies with exponential backoff (retry_manager.go)
  - Add retry queue for failed task scheduling (retry_queue.go)
  - Enhance worker metrics with detailed per-task-type tracking
  - Integrate database operations into task worker for status updates
  - Add comprehensive metrics logging system
2026-04-03 10:07:43 +08:00
douxu 9e4c35794c implemented task queue publishing using RabbitMQ
- Added configuration middleware integration
- Added retry logic for queue publishing
- Added task worker initialization (main.go):
  - Created initTaskWorker function for task worker configuration
  - Added worker startup and shutdown logic
  - Added CORS middleware configuration
  - Registered config middleware
2026-04-01 17:15:33 +08:00
douxu 7ea66e48af add code of async task system 2026-03-20 15:00:04 +08:00
douxu de5f976c31 add route of async task system 2026-03-17 16:08:46 +08:00
douxu adcc8c6c91 add code of async task system 2026-03-13 11:45:22 +08:00
douxu 6e0d2186d8 optimize code of async task system 2026-03-12 16:37:06 +08:00
douxu a94abdb479 initialize the basic structure of the asynchronous task system 2026-03-05 17:15:51 +08:00
douxu 898beaeec4 optimize struct of rabbitmq event 2026-03-02 17:00:09 +08:00
douxu 4b52e5f3c6 optimize code of event record and push rabbitmq func 2026-02-28 17:38:33 +08:00
douxu f6bb3fb985 optimize code of push event to rabbitmq 2026-02-26 16:48:12 +08:00
douxu 2ececc38d9 optimize code organization structure of rabbitmq event 2026-02-25 17:14:25 +08:00
douxu 6c9da6fcd4 init event struct with option mode 2026-02-24 17:08:48 +08:00
douxu 56b9999d6b add constant variables of power system events 2026-02-12 17:09:08 +08:00
douxu 1c385ee60d optimize code of rabbitmq connection and event alarm struct 2026-02-11 16:43:42 +08:00
douxu 6618209bcc optimize code of rabbitmq connection 2026-02-06 17:45:59 +08:00
douxu 581153ed8d add git ignore item to mask certificate files 2026-02-05 17:01:16 +08:00
douxu f45b7d5fa4 optimize code of init rabbitmq connect func 2026-02-04 17:43:09 +08:00
douxu 9be984899c optimize code of push event alarm func 2026-02-03 17:05:32 +08:00
douxu 35cb969a54 add code of inter-module communication 2026-02-02 16:48:46 +08:00
douxu 02e0c9c31a optimize postgres db code 2026-01-30 17:42:50 +08:00
douxu 2126aa7b06 optimize code of config 2026-01-29 17:00:20 +08:00
douxu 3374eec047 optimize code of redis init 2026-01-28 16:49:12 +08:00
douxu 3ff29cc072 optimize code of real time data pull api 2026-01-28 14:03:25 +08:00
douxu 617d21500e optimize code of redis connect func and real time data calculate 2026-01-27 17:41:17 +08:00
douxu 1a1727adab optimize response code and business code of measurement sub api 2026-01-26 16:29:50 +08:00
douxu fd2b202037 optimize code of websocket close handler 2026-01-22 16:19:00 +08:00
112 changed files with 7352 additions and 1213 deletions

.gitignore vendored
View File

@@ -27,3 +27,16 @@ go.work
/log/
# Shield config files in the configs folder
/configs/**/*.yaml
/configs/**/*.pem
# ai config
.cursor/
.claude/
.cursorrules
.copilot/
.chatgpt/
.ai_history/
.vector_cache/
ai-debug.log
*.patch
*.diff

View File

@@ -16,6 +16,12 @@ var (
// ErrFoundTargetFailed define variable to returned when the specific database table cannot be identified using the provided token info.
ErrFoundTargetFailed = newError(40004, "found target table by token failed")
// ErrSubTargetRepeat define variable to indicates subscription target already exist in list
ErrSubTargetRepeat = newError(40005, "subscription target already exist in list")
// ErrSubTargetNotFound define variable to indicates can not find measurement by subscription target
ErrSubTargetNotFound = newError(40006, "found measurement by subscription target failed")
// ErrCancelSubTargetMissing define variable to indicates cancel a not exist subscription target
ErrCancelSubTargetMissing = newError(40007, "cancel a not exist subscription target")
// ErrDBQueryFailed define variable to represents a generic failure during a PostgreSQL SELECT or SCAN operation.
ErrDBQueryFailed = newError(50001, "query postgres database data failed")

View File

@@ -0,0 +1,10 @@
// Package common define common error variables
package common
import "errors"
// ErrUnknowEventActionCommand define error of unknown event action command
var ErrUnknowEventActionCommand = errors.New("unknown action command")
// ErrExecEventActionFailed define error of execute event action failed
var ErrExecEventActionFailed = errors.New("exec event action func failed")

View File

@@ -1,5 +1,5 @@
// Package constants define constant variable
package constants
// Package common define common error variables
package common
import "errors"

View File

@@ -44,10 +44,11 @@ var baseCurrentFunc = func(anchorValue float64, args ...float64) float64 {
// SelectAnchorCalculateFuncAndParams define select anchor func and anchor calculate value by component type, anchor name and component data
func SelectAnchorCalculateFuncAndParams(componentType int, anchorName string, componentData map[string]interface{}) (func(anchorValue float64, args ...float64) float64, []float64) {
if componentType == constants.DemoType {
if anchorName == "voltage" {
switch anchorName {
case "voltage":
resistance := componentData["resistance"].(float64)
return baseVoltageFunc, []float64{resistance}
} else if anchorName == "current" {
case "current":
resistance := componentData["resistance"].(float64)
return baseCurrentFunc, []float64{resistance}
}

View File

@@ -3,6 +3,7 @@ package config
import (
"fmt"
"time"
"github.com/spf13/viper"
)
@@ -19,6 +20,21 @@ type ServiceConfig struct {
ServiceAddr string `mapstructure:"service_addr"`
ServiceName string `mapstructure:"service_name"`
SecretKey string `mapstructure:"secret_key"`
DeployEnv string `mapstructure:"deploy_env"`
}
// RabbitMQConfig define config struct of RabbitMQ config
type RabbitMQConfig struct {
CACertPath string `mapstructure:"ca_cert_path"`
ClientKeyPath string `mapstructure:"client_key_path"`
ClientKeyPassword string `mapstructure:"client_key_password"`
ClientCertPath string `mapstructure:"client_cert_path"`
InsecureSkipVerify bool `mapstructure:"insecure_skip_verify"`
ServerName string `mapstructure:"server_name"`
User string `mapstructure:"user"`
Password string `mapstructure:"password"`
Host string `mapstructure:"host"`
Port int `mapstructure:"port"`
}
// KafkaConfig define config struct of kafka config
@@ -57,7 +73,9 @@ type RedisConfig struct {
Password string `mapstructure:"password"`
DB int `mapstructure:"db"`
PoolSize int `mapstructure:"poolsize"`
Timeout int `mapstructure:"timeout"`
DialTimeout int `mapstructure:"dial_timeout"`
ReadTimeout int `mapstructure:"read_timeout"`
WriteTimeout int `mapstructure:"write_timeout"`
}
// AntsConfig define config struct of ants pool config
@@ -74,17 +92,36 @@ type DataRTConfig struct {
Method string `mapstructure:"polling_api_method"`
}
// OtelConfig define config struct of OpenTelemetry tracing
type OtelConfig struct {
Endpoint string `mapstructure:"endpoint"` // e.g. "localhost:4318"
Insecure bool `mapstructure:"insecure"`
}
// AsyncTaskConfig define config struct of asynchronous task system
type AsyncTaskConfig struct {
WorkerPoolSize int `mapstructure:"worker_pool_size"`
QueueConsumerCount int `mapstructure:"queue_consumer_count"`
MaxRetryCount int `mapstructure:"max_retry_count"`
RetryInitialDelay time.Duration `mapstructure:"retry_initial_delay"`
RetryMaxDelay time.Duration `mapstructure:"retry_max_delay"`
HealthCheckInterval time.Duration `mapstructure:"health_check_interval"`
}
// ModelRTConfig define config struct of model runtime server
type ModelRTConfig struct {
BaseConfig `mapstructure:"base"`
ServiceConfig `mapstructure:"service"`
PostgresConfig `mapstructure:"postgres"`
RabbitMQConfig `mapstructure:"rabbitmq"`
KafkaConfig `mapstructure:"kafka"`
LoggerConfig `mapstructure:"logger"`
AntsConfig `mapstructure:"ants"`
DataRTConfig `mapstructure:"dataRT"`
LockerRedisConfig RedisConfig `mapstructure:"locker_redis"`
StorageRedisConfig RedisConfig `mapstructure:"storage_redis"`
AsyncTaskConfig AsyncTaskConfig `mapstructure:"async_task"`
OtelConfig OtelConfig `mapstructure:"otel"`
PostgresDBURI string `mapstructure:"-"`
}

View File

@@ -1,17 +0,0 @@
// Package constants define constant variable
package constants
const (
// CodeSuccess define constant to indicates that the API was successfully processed
CodeSuccess = 20000
// CodeInvalidParamFailed define constant to indicates request parameter parsing failed
CodeInvalidParamFailed = 40001
// CodeDBQueryFailed define constant to indicates database query operation failed
CodeDBQueryFailed = 50001
// CodeDBUpdateFailed define constant to indicates database update operation failed
CodeDBUpdateFailed = 50002
// CodeRedisQueryFailed define constant to indicates redis query operation failed
CodeRedisQueryFailed = 60001
// CodeRedisUpdateFailed define constant to indicates redis update operation failed
CodeRedisUpdateFailed = 60002
)

View File

@@ -0,0 +1,31 @@
// Package constants define constant variable
package constants
const (
// CodeSuccess define constant to indicates that the API was successfully processed
CodeSuccess = 20000
// CodeInvalidParamFailed define constant to indicates request parameter parsing failed
CodeInvalidParamFailed = 40001
// CodeFoundTargetFailed define variable to returned when the specific database table cannot be identified using the provided token info.
CodeFoundTargetFailed = 40004
// CodeSubTargetRepeat define variable to indicates subscription target already exist in list
CodeSubTargetRepeat = 40005
// CodeSubTargetNotFound define variable to indicates can not find measurement by subscription target
CodeSubTargetNotFound = 40006
// CodeCancelSubTargetMissing define variable to indicates cancel a not exist subscription target
CodeCancelSubTargetMissing = 40007
// CodeUpdateSubTargetMissing define variable to indicates update a not exist subscription target
CodeUpdateSubTargetMissing = 40008
// CodeAppendSubTargetMissing define variable to indicates append a not exist subscription target
CodeAppendSubTargetMissing = 40009
// CodeUnsupportSubOperation define variable to indicates an unsupported subscription operation
CodeUnsupportSubOperation = 40010
// CodeDBQueryFailed define constant to indicates database query operation failed
CodeDBQueryFailed = 50001
// CodeDBUpdateFailed define constant to indicates database update operation failed
CodeDBUpdateFailed = 50002
// CodeRedisQueryFailed define constant to indicates redis query operation failed
CodeRedisQueryFailed = 60001
// CodeRedisUpdateFailed define constant to indicates redis update operation failed
CodeRedisUpdateFailed = 60002
)

constants/deploy_mode.go Normal file
View File

@@ -0,0 +1,11 @@
// Package constants define constant variable
package constants
const (
// DevelopmentDeployMode define development operating environment for modelRT project
DevelopmentDeployMode = "development"
// DebugDeployMode define debug operating environment for modelRT project
DebugDeployMode = "debug"
// ProductionDeployMode define production operating environment for modelRT project
ProductionDeployMode = "production"
)

View File

@@ -1,31 +1,92 @@
// Package constants define constant variable
package constants
// EventType define event type
type EventType int
const (
// TIBreachTriggerType define out of bounds type constant
TIBreachTriggerType = "trigger"
// EventGeneralHard define general hard event type
EventGeneralHard EventType = iota
// EventGeneralPlatformSoft define general platform soft event type
EventGeneralPlatformSoft
// EventGeneralApplicationSoft define general application soft event type
EventGeneralApplicationSoft
// EventWarnHard define warn hard event type
EventWarnHard
// EventWarnPlatformSoft define warn platform soft event type
EventWarnPlatformSoft
// EventWarnApplicationSoft define warn application soft event type
EventWarnApplicationSoft
// EventCriticalHard define critical hard event type
EventCriticalHard
// EventCriticalPlatformSoft define critical platform soft event type
EventCriticalPlatformSoft
// EventCriticalApplicationSoft define critical application soft event type
EventCriticalApplicationSoft
)
// IsGeneral define func to check event type is general
func IsGeneral(eventType EventType) bool {
return eventType < 3
}
// IsWarning define func to check event type is warn
func IsWarning(eventType EventType) bool {
return eventType >= 3 && eventType <= 5
}
// IsCritical define func to check event type is critical
func IsCritical(eventType EventType) bool {
return eventType >= 6
}
const (
// EventFromStation define event from station type
EventFromStation = "station"
// EventFromPlatform define event from platform type
EventFromPlatform = "platform"
// EventFromOthers define event from others type
EventFromOthers = "others"
)
const (
// TelemetryUpLimit define telemetry upper limit
TelemetryUpLimit = "up"
// TelemetryUpUpLimit define telemetry upper upper limit
TelemetryUpUpLimit = "upup"
// TelemetryDownLimit define telemetry lower limit
TelemetryDownLimit = "down"
// TelemetryDownDownLimit define telemetry lower lower limit
TelemetryDownDownLimit = "downdown"
// EventStatusHappened define status for event record when event just happened, no data attached yet
EventStatusHappened = iota
// EventStatusDataAttached define status for event record when event just happened, data attached already
EventStatusDataAttached
// EventStatusReported define status for event record when event reported to CIM, no matter it's successful or failed
EventStatusReported
// EventStatusConfirmed define status for event record when event confirmed by CIM, no matter it's successful or failed
EventStatusConfirmed
// EventStatusPersisted define status for event record when event persisted in database, no matter it's successful or failed
EventStatusPersisted
// EventStatusClosed define status for event record when event closed, no matter it's successful or failed
EventStatusClosed
)
const (
// TelesignalRaising define telesignal raising edge
TelesignalRaising = "raising"
// TelesignalFalling define telesignal falling edge
TelesignalFalling = "falling"
// EventExchangeName define exchange name for event alarm message
EventExchangeName = "event-exchange"
// EventDeadExchangeName define dead letter exchange name for event alarm message
EventDeadExchangeName = "event-dead-letter-exchange"
)
const (
// MinBreachCount define min breach count of real time data
MinBreachCount = 10
// EventUpDownRoutingKey define routing key for up or down limit event alarm message
EventUpDownRoutingKey = "event.#"
// EventUpDownDeadRoutingKey define dead letter routing key for up or down limit event alarm message
EventUpDownDeadRoutingKey = "event.#"
// EventUpDownQueueName define queue name for up or down limit event alarm message
EventUpDownQueueName = "event-up-down-queue"
// EventUpDownDeadQueueName define dead letter queue name for event alarm message
EventUpDownDeadQueueName = "event-dead-letter-queue"
)
const (
// EventGeneralUpDownLimitCategory define category for general up and down limit event
EventGeneralUpDownLimitCategory = "event.general.updown.limit"
// EventWarnUpDownLimitCategory define category for warn up and down limit event
EventWarnUpDownLimitCategory = "event.warn.updown.limit"
// EventCriticalUpDownLimitCategory define category for critical up and down limit event
EventCriticalUpDownLimitCategory = "event.critical.updown.limit"
)

View File

@@ -12,29 +12,6 @@ const (
SubUpdateAction string = "update"
)
// Define status code constants
// TODO: migrate codes from 4-digit to 5-digit format
const (
// SubSuccessCode define subscription success code
SubSuccessCode = "1001"
// SubFailedCode define subscription failed code
SubFailedCode = "1002"
// RTDSuccessCode define real time data return success code
RTDSuccessCode = "1003"
// RTDFailedCode define real time data return failed code
RTDFailedCode = "1004"
// CancelSubSuccessCode define cancel subscription success code
CancelSubSuccessCode = "1005"
// CancelSubFailedCode define cancel subscription failed code
CancelSubFailedCode = "1006"
// SubRepeatCode define subscription repeat code
SubRepeatCode = "1007"
// UpdateSubSuccessCode define update subscription success code
UpdateSubSuccessCode = "1008"
// UpdateSubFailedCode define update subscription failed code
UpdateSubFailedCode = "1009"
)
const (
// SysCtrlPrefix define to indicates the prefix for all system control directives, facilitating unified parsing within the sendDataStream goroutine
SysCtrlPrefix = "SYS_CTRL_"

constants/task.go Normal file
View File

@@ -0,0 +1,54 @@
// Package constants defines task-related constants for the async task system
package constants
import "time"
// Task priority levels
const (
// TaskPriorityDefault is the default priority level for tasks
TaskPriorityDefault = 5
// TaskPriorityHigh represents high priority tasks
TaskPriorityHigh = 10
// TaskPriorityLow represents low priority tasks
TaskPriorityLow = 1
)
// Task queue configuration
const (
// TaskExchangeName is the name of the exchange for task routing
TaskExchangeName = "modelrt.tasks.exchange"
// TaskQueueName is the name of the main task queue
TaskQueueName = "modelrt.tasks.queue"
// TaskRoutingKey is the routing key for task messages
TaskRoutingKey = "modelrt.task"
)
// Task message settings
const (
// TaskMaxPriority is the maximum priority level for tasks (0-10)
TaskMaxPriority = 10
// TaskDefaultMessageTTL is the default time-to-live for task messages (24 hours)
TaskDefaultMessageTTL = 24 * time.Hour
)
// Task retry settings
const (
// TaskRetryMaxDefault is the default maximum number of retry attempts
TaskRetryMaxDefault = 3
// TaskRetryInitialDelayDefault is the default initial delay for exponential backoff
TaskRetryInitialDelayDefault = 1 * time.Second
// TaskRetryMaxDelayDefault is the default maximum delay for exponential backoff
TaskRetryMaxDelayDefault = 5 * time.Minute
// TaskRetryRandomFactorDefault is the default random factor for jitter (10%)
TaskRetryRandomFactorDefault = 0.1
// TaskRetryFixedDelayDefault is the default delay for fixed retry strategy
TaskRetryFixedDelayDefault = 5 * time.Second
)
// Test task settings
const (
// TestTaskSleepDurationDefault is the default sleep duration for test tasks (60 seconds)
TestTaskSleepDurationDefault = 60
// TestTaskSleepDurationMax is the maximum allowed sleep duration for test tasks (1 hour)
TestTaskSleepDurationMax = 3600
)

View File

@@ -0,0 +1,31 @@
// Package constants define constant variable
package constants
const (
// TIBreachTriggerType define out of bounds type constant
TIBreachTriggerType = "trigger"
)
const (
// TelemetryUpLimit define telemetry upper limit
TelemetryUpLimit = "up"
// TelemetryUpUpLimit define telemetry upper upper limit
TelemetryUpUpLimit = "upup"
// TelemetryDownLimit define telemetry lower limit
TelemetryDownLimit = "down"
// TelemetryDownDownLimit define telemetry lower lower limit
TelemetryDownDownLimit = "downdown"
)
const (
// TelesignalRaising define telesignal raising edge
TelesignalRaising = "raising"
// TelesignalFalling define telesignal falling edge
TelesignalFalling = "falling"
)
const (
// MinBreachCount define min breach count of real time data
MinBreachCount = 10
)

View File

@@ -7,3 +7,13 @@ const (
HeaderSpanID = "X-B3-SpanId"
HeaderParentSpanID = "X-B3-ParentSpanId"
)
// traceCtxKey is an unexported type for context keys to avoid collisions with other packages.
type traceCtxKey string
// Typed context keys for trace values — use these with context.WithValue / ctx.Value.
var (
CtxKeyTraceID = traceCtxKey(HeaderTraceID)
CtxKeySpanID = traceCtxKey(HeaderSpanID)
CtxKeyParentSpanID = traceCtxKey(HeaderParentSpanID)
)

View File

@@ -0,0 +1,227 @@
// Package database define database operation functions
package database
import (
"context"
"time"
"modelRT/orm"
"github.com/gofrs/uuid"
"gorm.io/gorm"
)
// UpdateTaskStarted updates task start time and status to running
func UpdateTaskStarted(ctx context.Context, tx *gorm.DB, taskID uuid.UUID, startedAt int64) error {
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Model(&orm.AsyncTask{}).
Where("task_id = ?", taskID).
Updates(map[string]any{
"status": orm.AsyncTaskStatusRunning,
"started_at": startedAt,
})
return result.Error
}
// UpdateTaskRetryInfo updates task retry information
func UpdateTaskRetryInfo(ctx context.Context, tx *gorm.DB, taskID uuid.UUID, retryCount int, nextRetryTime int64) error {
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
updateData := map[string]any{
"retry_count": retryCount,
}
if nextRetryTime <= 0 {
updateData["next_retry_time"] = nil
} else {
updateData["next_retry_time"] = nextRetryTime
}
result := tx.WithContext(cancelCtx).
Model(&orm.AsyncTask{}).
Where("task_id = ?", taskID).
Updates(updateData)
return result.Error
}
// UpdateTaskErrorInfo updates task error information
func UpdateTaskErrorInfo(ctx context.Context, tx *gorm.DB, taskID uuid.UUID, errorMsg, stackTrace string) error {
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Model(&orm.AsyncTask{}).
Where("task_id = ?", taskID).
Updates(map[string]any{
"failure_reason": errorMsg,
"stack_trace": stackTrace,
})
return result.Error
}
// UpdateTaskExecutionTime updates task execution time
func UpdateTaskExecutionTime(ctx context.Context, tx *gorm.DB, taskID uuid.UUID, executionTime int64) error {
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Model(&orm.AsyncTask{}).
Where("task_id = ?", taskID).
Update("execution_time", executionTime)
return result.Error
}
// UpdateTaskWorkerID updates the worker ID that is processing the task
func UpdateTaskWorkerID(ctx context.Context, tx *gorm.DB, taskID uuid.UUID, workerID string) error {
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Model(&orm.AsyncTask{}).
Where("task_id = ?", taskID).
Update("worker_id", workerID)
return result.Error
}
// UpdateTaskPriority updates task priority
func UpdateTaskPriority(ctx context.Context, tx *gorm.DB, taskID uuid.UUID, priority int) error {
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Model(&orm.AsyncTask{}).
Where("task_id = ?", taskID).
Update("priority", priority)
return result.Error
}
// UpdateTaskQueueName updates task queue name
func UpdateTaskQueueName(ctx context.Context, tx *gorm.DB, taskID uuid.UUID, queueName string) error {
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Model(&orm.AsyncTask{}).
Where("task_id = ?", taskID).
Update("queue_name", queueName)
return result.Error
}
// UpdateTaskCreatedBy updates task creator information
func UpdateTaskCreatedBy(ctx context.Context, tx *gorm.DB, taskID uuid.UUID, createdBy string) error {
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Model(&orm.AsyncTask{}).
Where("task_id = ?", taskID).
Update("created_by", createdBy)
return result.Error
}
// UpdateTaskResultWithMetrics updates task result with execution metrics
func UpdateTaskResultWithMetrics(ctx context.Context, tx *gorm.DB, taskID uuid.UUID, executionTime int64, memoryUsage *int64, cpuUsage *float64, retryCount int, completedAt int64) error {
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Model(&orm.AsyncTaskResult{}).
Where("task_id = ?", taskID).
Updates(map[string]any{
"execution_time": executionTime,
"memory_usage": memoryUsage,
"cpu_usage": cpuUsage,
"retry_count": retryCount,
"completed_at": completedAt,
})
return result.Error
}
// GetTasksForRetry retrieves tasks that are due for retry
func GetTasksForRetry(ctx context.Context, tx *gorm.DB, limit int) ([]orm.AsyncTask, error) {
var tasks []orm.AsyncTask
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
now := time.Now().Unix()
result := tx.WithContext(cancelCtx).
Where("status = ? AND next_retry_time IS NOT NULL AND next_retry_time <= ?", orm.AsyncTaskStatusFailed, now).
Order("next_retry_time ASC").
Limit(limit).
Find(&tasks)
if result.Error != nil {
return nil, result.Error
}
return tasks, nil
}
// GetTasksByPriority retrieves tasks by priority order
func GetTasksByPriority(ctx context.Context, tx *gorm.DB, status orm.AsyncTaskStatus, limit int) ([]orm.AsyncTask, error) {
var tasks []orm.AsyncTask
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Where("status = ?", status).
Order("priority DESC, created_at ASC").
Limit(limit).
Find(&tasks)
if result.Error != nil {
return nil, result.Error
}
return tasks, nil
}
// GetTasksByWorkerID retrieves tasks being processed by a specific worker
func GetTasksByWorkerID(ctx context.Context, tx *gorm.DB, workerID string) ([]orm.AsyncTask, error) {
var tasks []orm.AsyncTask
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Where("worker_id = ? AND status = ?", workerID, orm.AsyncTaskStatusRunning).
Find(&tasks)
if result.Error != nil {
return nil, result.Error
}
return tasks, nil
}
// CleanupStaleTasks marks tasks as failed if they have been running for too long
func CleanupStaleTasks(ctx context.Context, tx *gorm.DB, timeoutSeconds int64) (int64, error) {
cancelCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
threshold := time.Now().Unix() - timeoutSeconds
result := tx.WithContext(cancelCtx).
Model(&orm.AsyncTask{}).
Where("status = ? AND started_at IS NOT NULL AND started_at < ?", orm.AsyncTaskStatusRunning, threshold).
Updates(map[string]any{
"status": orm.AsyncTaskStatusFailed,
"failure_reason": "task timeout",
"finished_at": time.Now().Unix(),
})
return result.RowsAffected, result.Error
}

View File

@@ -0,0 +1,321 @@
// Package database define database operation functions
package database
import (
"context"
"time"
"modelRT/orm"
"github.com/gofrs/uuid"
"gorm.io/gorm"
"gorm.io/gorm/clause"
)
// CreateAsyncTask creates a new async task in the database
func CreateAsyncTask(ctx context.Context, tx *gorm.DB, taskType orm.AsyncTaskType, params orm.JSONMap) (*orm.AsyncTask, error) {
taskID, err := uuid.NewV4()
if err != nil {
return nil, err
}
task := &orm.AsyncTask{
TaskID: taskID,
TaskType: taskType,
Status: orm.AsyncTaskStatusSubmitted,
Params: params,
CreatedAt: time.Now().Unix(),
}
// bound this query with a 5s timeout
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).Create(task)
if result.Error != nil {
return nil, result.Error
}
return task, nil
}
// GetAsyncTaskByID retrieves an async task by its ID
func GetAsyncTaskByID(ctx context.Context, tx *gorm.DB, taskID uuid.UUID) (*orm.AsyncTask, error) {
var task orm.AsyncTask
// bound this query with a 5s timeout
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Where("task_id = ?", taskID).
Clauses(clause.Locking{Strength: "UPDATE"}).
First(&task)
if result.Error != nil {
return nil, result.Error
}
return &task, nil
}
// GetAsyncTasksByIDs retrieves multiple async tasks by their IDs
func GetAsyncTasksByIDs(ctx context.Context, tx *gorm.DB, taskIDs []uuid.UUID) ([]orm.AsyncTask, error) {
var tasks []orm.AsyncTask
if len(taskIDs) == 0 {
return tasks, nil
}
// bound this query with a 5s timeout
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Where("task_id IN ?", taskIDs).
Clauses(clause.Locking{Strength: "UPDATE"}).
Find(&tasks)
if result.Error != nil {
return nil, result.Error
}
return tasks, nil
}
// UpdateAsyncTaskStatus updates the status of an async task
func UpdateAsyncTaskStatus(ctx context.Context, tx *gorm.DB, taskID uuid.UUID, status orm.AsyncTaskStatus) error {
// bound the DB call with a 5-second timeout
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Model(&orm.AsyncTask{}).
Where("task_id = ?", taskID).
Update("status", status)
return result.Error
}
// UpdateAsyncTaskProgress updates the progress of an async task
func UpdateAsyncTaskProgress(ctx context.Context, tx *gorm.DB, taskID uuid.UUID, progress int) error {
// bound the DB call with a 5-second timeout
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Model(&orm.AsyncTask{}).
Where("task_id = ?", taskID).
Update("progress", progress)
return result.Error
}
// CompleteAsyncTask marks an async task as completed with timestamp
func CompleteAsyncTask(ctx context.Context, tx *gorm.DB, taskID uuid.UUID, timestamp int64) error {
// bound the DB call with a 5-second timeout
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Model(&orm.AsyncTask{}).
Where("task_id = ?", taskID).
Updates(map[string]any{
"status": orm.AsyncTaskStatusCompleted,
"finished_at": timestamp,
"progress": 100,
})
return result.Error
}
// FailAsyncTask marks an async task as failed with timestamp
func FailAsyncTask(ctx context.Context, tx *gorm.DB, taskID uuid.UUID, timestamp int64) error {
// bound the DB call with a 5-second timeout
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Model(&orm.AsyncTask{}).
Where("task_id = ?", taskID).
Updates(map[string]any{
"status": orm.AsyncTaskStatusFailed,
"finished_at": timestamp,
})
return result.Error
}
// CreateAsyncTaskResult creates a result record for an async task
func CreateAsyncTaskResult(ctx context.Context, tx *gorm.DB, taskID uuid.UUID, result orm.JSONMap) error {
taskResult := &orm.AsyncTaskResult{
TaskID: taskID,
Result: result,
}
// bound the DB call with a 5-second timeout
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
resultOp := tx.WithContext(cancelCtx).Create(taskResult)
return resultOp.Error
}
// UpdateAsyncTaskResultWithError updates a task result with error information
func UpdateAsyncTaskResultWithError(ctx context.Context, tx *gorm.DB, taskID uuid.UUID, code int, message string, detail orm.JSONMap) error {
// bound the DB call with a 5-second timeout
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
// Update with error information
result := tx.WithContext(cancelCtx).
Model(&orm.AsyncTaskResult{}).
Where("task_id = ?", taskID).
Updates(map[string]any{
"error_code": code,
"error_message": message,
"error_detail": detail,
"result": nil,
})
return result.Error
}
// UpdateAsyncTaskResultWithSuccess updates a task result with success information
func UpdateAsyncTaskResultWithSuccess(ctx context.Context, tx *gorm.DB, taskID uuid.UUID, result orm.JSONMap) error {
// bound the DB call with a 5-second timeout
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
// ensure the result record exists, creating it if missing
existingResult := tx.WithContext(cancelCtx).
Where("task_id = ?", taskID).
FirstOrCreate(&orm.AsyncTaskResult{TaskID: taskID})
if existingResult.Error != nil {
return existingResult.Error
}
// Update with success information
updateResult := tx.WithContext(cancelCtx).
Model(&orm.AsyncTaskResult{}).
Where("task_id = ?", taskID).
Updates(map[string]any{
"result": result,
"error_code": nil,
"error_message": nil,
"error_detail": nil,
})
return updateResult.Error
}
// GetAsyncTaskResult retrieves the result of an async task
func GetAsyncTaskResult(ctx context.Context, tx *gorm.DB, taskID uuid.UUID) (*orm.AsyncTaskResult, error) {
var taskResult orm.AsyncTaskResult
// bound the DB call with a 5-second timeout
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Where("task_id = ?", taskID).
First(&taskResult)
if result.Error != nil {
if result.Error == gorm.ErrRecordNotFound {
return nil, nil
}
return nil, result.Error
}
return &taskResult, nil
}
// GetAsyncTaskResults retrieves multiple task results by task IDs
func GetAsyncTaskResults(ctx context.Context, tx *gorm.DB, taskIDs []uuid.UUID) ([]orm.AsyncTaskResult, error) {
var taskResults []orm.AsyncTaskResult
if len(taskIDs) == 0 {
return taskResults, nil
}
// bound the DB call with a 5-second timeout
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Where("task_id IN ?", taskIDs).
Find(&taskResults)
if result.Error != nil {
return nil, result.Error
}
return taskResults, nil
}
// GetPendingTasks retrieves pending tasks (submitted but not yet running/completed)
func GetPendingTasks(ctx context.Context, tx *gorm.DB, limit int) ([]orm.AsyncTask, error) {
var tasks []orm.AsyncTask
// bound the DB call with a 5-second timeout
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Where("status = ?", orm.AsyncTaskStatusSubmitted).
Order("created_at ASC").
Limit(limit).
Find(&tasks)
if result.Error != nil {
return nil, result.Error
}
return tasks, nil
}
// GetTasksByStatus retrieves tasks by status
func GetTasksByStatus(ctx context.Context, tx *gorm.DB, status orm.AsyncTaskStatus, limit int) ([]orm.AsyncTask, error) {
var tasks []orm.AsyncTask
// bound the DB call with a 5-second timeout
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Where("status = ?", status).
Order("created_at ASC").
Limit(limit).
Find(&tasks)
if result.Error != nil {
return nil, result.Error
}
return tasks, nil
}
// DeleteOldTasks deletes tasks older than the specified timestamp
func DeleteOldTasks(ctx context.Context, tx *gorm.DB, olderThan int64) error {
// bound the cleanup with a 30-second timeout
cancelCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
// First delete task results
result := tx.WithContext(cancelCtx).
Where("task_id IN (SELECT task_id FROM async_task WHERE created_at < ?)", olderThan).
Delete(&orm.AsyncTaskResult{})
if result.Error != nil {
return result.Error
}
// Then delete tasks
result = tx.WithContext(cancelCtx).
Where("created_at < ?", olderThan).
Delete(&orm.AsyncTask{})
return result.Error
}

View File

@ -33,7 +33,7 @@ func CreateComponentIntoDB(ctx context.Context, tx *gorm.DB, componentInfo netwo
Name: componentInfo.Name,
Context: componentInfo.Context,
Op: componentInfo.Op,
Ts: time.Now(),
TS: time.Now(),
}
result := tx.WithContext(cancelCtx).Create(&component)

View File

@ -35,7 +35,7 @@ func CreateMeasurement(ctx context.Context, tx *gorm.DB, measurementInfo network
BayUUID: globalUUID,
ComponentUUID: globalUUID,
Op: -1,
Ts: time.Now(),
TS: time.Now(),
}
result := tx.WithContext(cancelCtx).Create(&measurement)

View File

@ -53,7 +53,8 @@ func FillingLongTokenModel(ctx context.Context, tx *gorm.DB, identModel *model.L
func ParseDataIdentifierToken(ctx context.Context, tx *gorm.DB, identToken string) (model.IndentityTokenModelInterface, error) {
identSlice := strings.Split(identToken, ".")
identSliceLen := len(identSlice)
if identSliceLen == 4 {
switch identSliceLen {
case 4:
// token1.token2.token3.token4.token7
shortIndentModel := &model.ShortIdentityTokenModel{
GridTag: identSlice[0],
@ -67,7 +68,7 @@ func ParseDataIdentifierToken(ctx context.Context, tx *gorm.DB, identToken strin
return nil, err
}
return shortIndentModel, nil
} else if identSliceLen == 7 {
case 7:
// token1.token2.token3.token4.token5.token6.token7
longIndentModel := &model.LongIdentityTokenModel{
GridTag: identSlice[0],

View File

@ -19,7 +19,8 @@ func ParseAttrToken(ctx context.Context, tx *gorm.DB, attrToken, clientToken str
attrSlice := strings.Split(attrToken, ".")
attrLen := len(attrSlice)
if attrLen == 4 {
switch attrLen {
case 4:
short := &model.ShortAttrInfo{
AttrGroupName: attrSlice[2],
AttrKey: attrSlice[3],
@ -35,7 +36,7 @@ func ParseAttrToken(ctx context.Context, tx *gorm.DB, attrToken, clientToken str
}
short.AttrValue = attrValue
return short, nil
} else if attrLen == 7 {
case 7:
long := &model.LongAttrInfo{
AttrGroupName: attrSlice[5],
AttrKey: attrSlice[6],

View File

@ -4,9 +4,9 @@ package database
import (
"context"
"sync"
"time"
"modelRT/logger"
"modelRT/orm"
"gorm.io/driver/postgres"
"gorm.io/gorm"
@ -15,15 +15,11 @@ import (
var (
postgresOnce sync.Once
_globalPostgresClient *gorm.DB
_globalPostgresMu sync.RWMutex
)
// GetPostgresDBClient returns the global PostgresDB client. It's safe for concurrent use.
func GetPostgresDBClient() *gorm.DB {
_globalPostgresMu.RLock()
client := _globalPostgresClient
_globalPostgresMu.RUnlock()
return client
return _globalPostgresClient
}
// InitPostgresDBInstance returns an instance of the PostgresDB client
@ -36,11 +32,19 @@ func InitPostgresDBInstance(ctx context.Context, PostgresDBURI string) *gorm.DB
// initPostgresDBClient returns a successfully initialized PostgresDB client
func initPostgresDBClient(ctx context.Context, PostgresDBURI string) *gorm.DB {
ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
db, err := gorm.Open(postgres.Open(PostgresDBURI), &gorm.Config{Logger: logger.NewGormLogger()})
if err != nil {
panic(err)
}
// Auto migrate async task tables
err = db.WithContext(ctx).AutoMigrate(
&orm.AsyncTask{},
&orm.AsyncTaskResult{},
)
if err != nil {
panic(err)
}
return db
}
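InitPostgresDBInstance (outside this hunk) guards initPostgresDBClient with sync.Once, which is also why dropping the read mutex above is safe: the pointer is written exactly once before any reader obtains it. A stdlib-only sketch of that once-guarded singleton shape, with a plain struct standing in for *gorm.DB:

```go
package main

import (
	"fmt"
	"sync"
)

type client struct{ uri string }

var (
	once    sync.Once
	_global *client
)

// initOnce runs the constructor exactly once; later calls return the
// already-built client regardless of the URI they pass.
func initOnce(uri string) *client {
	once.Do(func() { _global = &client{uri: uri} })
	return _global
}

func main() {
	a := initOnce("postgres://first")
	b := initOnce("postgres://second") // ignored: init already ran
	fmt.Println(a == b, a.uri)
}
```

sync.Once establishes a happens-before edge between the initializing write and every subsequent Do call, so readers never observe a half-built client.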

56
database/query_bay.go Normal file
View File

@ -0,0 +1,56 @@
// Package database defines database operation functions
package database
import (
"context"
"time"
"modelRT/logger"
"modelRT/orm"
"github.com/gofrs/uuid"
"gorm.io/gorm"
"gorm.io/gorm/clause"
)
// QueryBayByUUID returns the Bay record matching bayUUID.
func QueryBayByUUID(ctx context.Context, tx *gorm.DB, bayUUID uuid.UUID) (*orm.Bay, error) {
var bay orm.Bay
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Where("bay_uuid = ?", bayUUID).
Clauses(clause.Locking{Strength: "UPDATE"}).
First(&bay)
if result.Error != nil {
return nil, result.Error
}
return &bay, nil
}
// QueryBaysByUUIDs returns Bay records matching the given UUIDs in a single query.
// The returned slice preserves database order; unmatched UUIDs are silently omitted.
func QueryBaysByUUIDs(ctx context.Context, tx *gorm.DB, bayUUIDs []uuid.UUID) ([]orm.Bay, error) {
if len(bayUUIDs) == 0 {
return nil, nil
}
var bays []orm.Bay
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Where("bay_uuid IN ?", bayUUIDs).
Clauses(clause.Locking{Strength: "UPDATE"}).
Find(&bays)
if result.Error != nil {
logger.Error(ctx, "query bays by uuids failed", "error", result.Error)
return nil, result.Error
}
return bays, nil
}

View File

@ -148,6 +148,39 @@ func QueryLongIdentModelInfoByToken(ctx context.Context, tx *gorm.DB, measTag st
return &resultComp, &meauserment, nil
}
// QueryComponentsInServiceByUUIDs returns a map of global_uuid → in_service for the
// given UUIDs. Only global_uuid and in_service columns are selected for efficiency.
func QueryComponentsInServiceByUUIDs(ctx context.Context, tx *gorm.DB, uuids []uuid.UUID) (map[uuid.UUID]bool, error) {
if len(uuids) == 0 {
return make(map[uuid.UUID]bool), nil
}
type row struct {
GlobalUUID uuid.UUID `gorm:"column:global_uuid"`
InService bool `gorm:"column:in_service"`
}
var rows []row
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Model(&orm.Component{}).
Select("global_uuid, in_service").
Where("global_uuid IN ?", uuids).
Scan(&rows)
if result.Error != nil {
return nil, result.Error
}
m := make(map[uuid.UUID]bool, len(rows))
for _, r := range rows {
m[r.GlobalUUID] = r.InService
}
return m, nil
}
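The two-column Scan above is folded into a map so callers get O(1) in_service lookups. That projection can be sketched with the standard library alone; string keys stand in for uuid.UUID here:

```go
package main

import "fmt"

type row struct {
	GlobalUUID string
	InService  bool
}

// toInServiceMap mirrors QueryComponentsInServiceByUUIDs' post-processing:
// keyed lookups, with absent UUIDs simply missing from the map.
func toInServiceMap(rows []row) map[string]bool {
	m := make(map[string]bool, len(rows))
	for _, r := range rows {
		m[r.GlobalUUID] = r.InService
	}
	return m
}

func main() {
	m := toInServiceMap([]row{{"a", true}, {"b", false}})
	inSvc, known := m["a"]
	fmt.Println(inSvc, known)
	_, known = m["missing"]
	fmt.Println(known) // callers must distinguish "false" from "unknown UUID"
}
```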
// QueryShortIdentModelInfoByToken define func to query short identity model info by short token
func QueryShortIdentModelInfoByToken(ctx context.Context, tx *gorm.DB, measTag string, condition *orm.Component) (*orm.Component, *orm.Measurement, error) {
var resultComp orm.Component

View File

@ -32,71 +32,51 @@ func QueryTopologic(ctx context.Context, tx *gorm.DB) ([]orm.Topologic, error) {
return topologics, nil
}
// QueryTopologicFromDB return the result of query topologic info from DB
func QueryTopologicFromDB(ctx context.Context, tx *gorm.DB) (*diagram.MultiBranchTreeNode, error) {
// QueryTopologicByStartUUID returns all edges reachable from startUUID following
// directed uuid_from → uuid_to edges in the topologic table.
func QueryTopologicByStartUUID(ctx context.Context, tx *gorm.DB, startUUID uuid.UUID) ([]orm.Topologic, error) {
var topologics []orm.Topologic
cancelCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
result := tx.WithContext(cancelCtx).
Clauses(clause.Locking{Strength: "UPDATE"}).
Raw(sql.RecursiveSQL, startUUID).
Scan(&topologics)
if result.Error != nil {
logger.Error(ctx, "query topologic by start uuid failed", "start_uuid", startUUID, "error", result.Error)
return nil, result.Error
}
return topologics, nil
}
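QueryTopologicByStartUUID delegates the reachability walk to a recursive SQL statement (sql.RecursiveSQL, not shown in this hunk). The same traversal over directed uuid_from → uuid_to edges can be sketched as an in-memory BFS; the string IDs below are illustrative:

```go
package main

import "fmt"

type edge struct{ from, to string }

// reachable returns every node reachable from start via directed edges,
// using a plain BFS with a visited set.
func reachable(edges []edge, start string) map[string]bool {
	adj := make(map[string][]string)
	for _, e := range edges {
		adj[e.from] = append(adj[e.from], e.to)
	}
	visited := map[string]bool{start: true}
	queue := []string{start}
	for len(queue) > 0 {
		cur := queue[0]
		queue = queue[1:]
		for _, next := range adj[cur] {
			if !visited[next] {
				visited[next] = true
				queue = append(queue, next)
			}
		}
	}
	return visited
}

func main() {
	edges := []edge{{"a", "b"}, {"b", "c"}, {"x", "y"}}
	r := reachable(edges, "a")
	fmt.Println(r["c"], r["y"]) // c is reachable from a; y is not
}
```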
// QueryTopologicFromDB returns the topologic info queried from the DB.
// Returns the root node and a flat nodeMap for O(1) lookup by UUID.
func QueryTopologicFromDB(ctx context.Context, tx *gorm.DB) (*diagram.MultiBranchTreeNode, map[uuid.UUID]*diagram.MultiBranchTreeNode, error) {
topologicInfos, err := QueryTopologic(ctx, tx)
if err != nil {
logger.Error(ctx, "query topologic info failed", "error", err)
return nil, err
return nil, nil, err
}
tree, err := BuildMultiBranchTree(topologicInfos)
tree, nodeMap, err := BuildMultiBranchTree(topologicInfos)
if err != nil {
logger.Error(ctx, "init topologic failed", "error", err)
return nil, err
return nil, nil, err
}
return tree, nil
return tree, nodeMap, nil
}
// InitCircuitDiagramTopologic return circuit diagram topologic info from postgres
func InitCircuitDiagramTopologic(topologicNodes []orm.Topologic) error {
var rootVertex *diagram.MultiBranchTreeNode
for _, node := range topologicNodes {
if node.UUIDFrom == constants.UUIDNil {
rootVertex = diagram.NewMultiBranchTree(node.UUIDFrom)
break
}
}
if rootVertex == nil {
return fmt.Errorf("root vertex is nil")
}
for _, node := range topologicNodes {
if node.UUIDFrom == constants.UUIDNil {
nodeVertex := diagram.NewMultiBranchTree(node.UUIDTo)
rootVertex.AddChild(nodeVertex)
}
}
node := rootVertex
for _, nodeVertex := range node.Children {
nextVertexs := make([]*diagram.MultiBranchTreeNode, 0)
nextVertexs = append(nextVertexs, nodeVertex)
}
return nil
}
// TODO current transformers do not form their own bay; busbars, cast-resin busbars, and transformers act as the bay boundary elements
func IntervalBoundaryDetermine(uuid uuid.UUID) bool {
diagram.GetComponentMap(uuid.String())
// TODO determine whether the component's type qualifies as a bay boundary
// TODO 0xA1B2C3D4: the high four hex digits are 0xFFFF for component types that can form a bay and 0x0000 for ordinary components. In the low four hex digits, the first two encode the first-level type (e.g. busbar PT, bus coupler/section, incoming line) and the last two the concrete type within that level (e.g. under busbar PT: voltage transformer, disconnector, earthing switch, surge arrester, live-line indicator).
num := uint32(0xA1B2C3D4) // eight-digit hex number
high16 := uint16(num >> 16)
fmt.Printf("original value: 0x%X\n", num) // prints: 0xA1B2C3D4
fmt.Printf("high 16 bits: 0x%X\n", high16) // prints: 0xA1B2
return true
}
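The TODO above packs the component type into one 32-bit word. A hedged decoding sketch of that layout (the field split follows the comment; the names decode, level1, and subtype are illustrative, not modelRT identifiers):

```go
package main

import "fmt"

// decode splits a packed component-type word per the TODO's layout:
// high 16 bits = bay-capable flag area, then the first-level type byte,
// then the concrete-subtype byte.
func decode(word uint32) (high16, level1, subtype uint16) {
	high16 = uint16(word >> 16)
	level1 = uint16(word>>8) & 0xFF
	subtype = uint16(word) & 0xFF
	return
}

func main() {
	h, l1, st := decode(0xA1B2C3D4)
	fmt.Printf("high16=0x%X level1=0x%X subtype=0x%X\n", h, l1, st)
}
```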
// BuildMultiBranchTree return the multi branch tree by topologic info and component type map
func BuildMultiBranchTree(topologics []orm.Topologic) (*diagram.MultiBranchTreeNode, error) {
// BuildMultiBranchTree returns the multi branch tree built from topologic info.
// Returns the root node and a flat nodeMap for O(1) lookup by UUID.
func BuildMultiBranchTree(topologics []orm.Topologic) (*diagram.MultiBranchTreeNode, map[uuid.UUID]*diagram.MultiBranchTreeNode, error) {
nodeMap := make(map[uuid.UUID]*diagram.MultiBranchTreeNode, len(topologics)*2)
for _, topo := range topologics {
if _, exists := nodeMap[topo.UUIDFrom]; !exists {
// skip special uuid
if topo.UUIDTo != constants.UUIDNil {
// UUIDNil is the virtual root sentinel — skip creating a regular node for it
if topo.UUIDFrom != constants.UUIDNil {
nodeMap[topo.UUIDFrom] = &diagram.MultiBranchTreeNode{
ID: topo.UUIDFrom,
Children: make([]*diagram.MultiBranchTreeNode, 0),
@ -105,7 +85,6 @@ func BuildMultiBranchTree(topologics []orm.Topologic) (*diagram.MultiBranchTreeN
}
if _, exists := nodeMap[topo.UUIDTo]; !exists {
// skip special uuid
if topo.UUIDTo != constants.UUIDNil {
nodeMap[topo.UUIDTo] = &diagram.MultiBranchTreeNode{
ID: topo.UUIDTo,
@ -118,10 +97,13 @@ func BuildMultiBranchTree(topologics []orm.Topologic) (*diagram.MultiBranchTreeN
for _, topo := range topologics {
var parent *diagram.MultiBranchTreeNode
if topo.UUIDFrom == constants.UUIDNil {
parent = &diagram.MultiBranchTreeNode{
if _, exists := nodeMap[constants.UUIDNil]; !exists {
nodeMap[constants.UUIDNil] = &diagram.MultiBranchTreeNode{
ID: constants.UUIDNil,
Children: make([]*diagram.MultiBranchTreeNode, 0),
}
nodeMap[constants.UUIDNil] = parent
}
parent = nodeMap[constants.UUIDNil]
} else {
parent = nodeMap[topo.UUIDFrom]
}
@ -141,7 +123,7 @@ func BuildMultiBranchTree(topologics []orm.Topologic) (*diagram.MultiBranchTreeN
// return root vertex
root, exists := nodeMap[constants.UUIDNil]
if !exists {
return nil, fmt.Errorf("root node not found")
return nil, nil, fmt.Errorf("root node not found")
}
return root, nil
return root, nodeMap, nil
}
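BuildMultiBranchTree creates a node per UUID in a flat map and links children under parents, with the nil UUID acting as a virtual-root sentinel. The shape can be sketched with a minimal local type; here "" plays the sentinel role and a get helper replaces the two existence checks:

```go
package main

import "fmt"

type node struct {
	ID       string
	Parent   *node
	Children []*node
}

type edge struct{ from, to string }

// build creates nodes lazily for both endpoints of every edge, then links
// children under parents; edges out of the sentinel hang off the root.
func build(edges []edge) (root *node, nodeMap map[string]*node) {
	const sentinel = ""
	nodeMap = map[string]*node{sentinel: {ID: sentinel}}
	get := func(id string) *node {
		if n, ok := nodeMap[id]; ok {
			return n
		}
		n := &node{ID: id}
		nodeMap[id] = n
		return n
	}
	for _, e := range edges {
		parent, child := get(e.from), get(e.to)
		child.Parent = parent
		parent.Children = append(parent.Children, child)
	}
	return nodeMap[sentinel], nodeMap
}

func main() {
	root, m := build([]edge{{"", "a"}, {"a", "b"}, {"a", "c"}})
	fmt.Println(len(root.Children), m["b"].Parent.ID)
}
```

Returning the flat nodeMap alongside the root is what gives later lookups (e.g. path finding between two UUIDs) their O(1) entry points.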

View File

@ -43,7 +43,7 @@ func UpdateComponentIntoDB(ctx context.Context, tx *gorm.DB, componentInfo netwo
Name: componentInfo.Name,
Context: componentInfo.Context,
Op: componentInfo.Op,
Ts: time.Now(),
TS: time.Now(),
}
result = tx.Model(&orm.Component{}).WithContext(cancelCtx).Where("GLOBAL_UUID = ?", component.GlobalUUID).Updates(&updateParams)

60
deploy/jaeger.yaml Normal file
View File

@ -0,0 +1,60 @@
apiVersion: v1
kind: Service
metadata:
name: jaeger
labels:
app: jaeger
spec:
ports:
- name: ui
port: 16686
targetPort: 16686
nodePort: 31686 # Jaeger UI: browse to http://<NodeIP>:31686
- name: collector-http
port: 14268
targetPort: 14268
nodePort: 31268 # Jaeger native HTTP collector (not OTel)
- name: otlp-http
port: 4318
targetPort: 4318
nodePort: 31318 # OTLP HTTP: outside the cluster use <NodeIP>:31318
- name: otlp-grpc
port: 4317
targetPort: 4317
nodePort: 31317 # OTLP gRPC: outside the cluster use <NodeIP>:31317
selector:
app: jaeger
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: jaeger
spec:
replicas: 1
selector:
matchLabels:
app: jaeger
template:
metadata:
labels:
app: jaeger
spec:
containers:
- name: jaeger
image: jaegertracing/all-in-one:1.56
env:
- name: COLLECTOR_OTLP_ENABLED
value: "true"
ports:
- containerPort: 16686 # UI
- containerPort: 14268 # Jaeger Collector
- containerPort: 4317 # OTLP gRPC
- containerPort: 4318 # OTLP HTTP
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 100m
memory: 128Mi

View File

@ -1,3 +1,4 @@
// Package diagram provide diagram data structure and operation
package diagram
import (
@ -31,11 +32,9 @@ func UpdateAnchorValue(componentUUID string, anchorValue string) bool {
// StoreAnchorValue define func of store anchor value with componentUUID and anchor name
func StoreAnchorValue(componentUUID string, anchorValue string) {
anchorValueOverview.Store(componentUUID, anchorValue)
return
}
// DeleteAnchorValue define func of delete anchor value with componentUUID
func DeleteAnchorValue(componentUUID string) {
anchorValueOverview.Delete(componentUUID)
return
}

View File

@ -1,3 +1,4 @@
// Package diagram provide diagram data structure and operation
package diagram
import (
@ -33,11 +34,9 @@ func UpdateComponentMap(componentID int64, componentInfo *orm.Component) bool {
// StoreComponentMap define func of store circuit diagram data with component uuid and component info
func StoreComponentMap(componentUUID string, componentInfo *orm.Component) {
diagramsOverview.Store(componentUUID, componentInfo)
return
}
// DeleteComponentMap define func of delete circuit diagram data with component uuid
func DeleteComponentMap(componentUUID string) {
diagramsOverview.Delete(componentUUID)
return
}

View File

@ -1,3 +1,4 @@
// Package diagram provide diagram data structure and operation
package diagram
import (
@ -29,5 +30,4 @@ func TestHMSet(t *testing.T) {
fmt.Printf("err:%v\n", err)
}
fmt.Printf("res:%v\n", res)
return
}

View File

@ -1,3 +1,4 @@
// Package diagram provide diagram data structure and operation
package diagram
import (
@ -62,3 +63,63 @@ func (n *MultiBranchTreeNode) PrintTree(level int) {
child.PrintTree(level + 1)
}
}
// FindPath returns the ordered node sequence from startID to endID using the
// supplied nodeMap for O(1) lookup. It walks each node up to the root to find
// the LCA, then stitches the two half-paths together.
// Returns nil when either node is absent from nodeMap or no path exists.
func FindPath(startID, endID uuid.UUID, nodeMap map[uuid.UUID]*MultiBranchTreeNode) []*MultiBranchTreeNode {
startNode, ok := nodeMap[startID]
if !ok {
return nil
}
endNode, ok := nodeMap[endID]
if !ok {
return nil
}
// collect ancestors (inclusive) from a node up to the root sentinel
ancestors := func(n *MultiBranchTreeNode) []*MultiBranchTreeNode {
var chain []*MultiBranchTreeNode
for n != nil {
chain = append(chain, n)
n = n.Parent
}
return chain
}
startChain := ancestors(startNode) // [start, ..., root]
endChain := ancestors(endNode) // [end, ..., root]
// index startChain by ID for fast LCA detection
startIdx := make(map[uuid.UUID]int, len(startChain))
for i, node := range startChain {
startIdx[node.ID] = i
}
// find LCA: first node in endChain that also appears in startChain
lcaEndPos := -1
lcaStartPos := -1
for i, node := range endChain {
if j, found := startIdx[node.ID]; found {
lcaEndPos = i
lcaStartPos = j
break
}
}
if lcaEndPos < 0 {
return nil // disconnected
}
// path = startChain[0..lcaStartPos] reversed + endChain[lcaEndPos..0] reversed
path := make([]*MultiBranchTreeNode, 0, lcaStartPos+lcaEndPos+1)
for i := 0; i <= lcaStartPos; i++ {
path = append(path, startChain[i])
}
// append end-side (skip LCA to avoid duplication), reversed
for i := lcaEndPos - 1; i >= 0; i-- {
path = append(path, endChain[i])
}
return path
}
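FindPath's climb-to-root LCA strategy can be exercised on a toy tree. A self-contained sketch that mirrors the algorithm with a local parent-pointer node type (only the pieces needed for the walk; Children are omitted):

```go
package main

import "fmt"

type node struct {
	ID     string
	Parent *node
}

// findPath mirrors diagram.FindPath: collect both ancestor chains up to
// the root, find the first shared ancestor, and stitch the two halves.
func findPath(start, end *node) []string {
	ancestors := func(n *node) []*node {
		var chain []*node
		for n != nil {
			chain = append(chain, n)
			n = n.Parent
		}
		return chain
	}
	sc, ec := ancestors(start), ancestors(end)
	idx := make(map[string]int, len(sc))
	for i, n := range sc {
		idx[n.ID] = i
	}
	lcaS, lcaE := -1, -1
	for i, n := range ec {
		if j, ok := idx[n.ID]; ok {
			lcaS, lcaE = j, i
			break
		}
	}
	if lcaE < 0 {
		return nil // disconnected
	}
	var path []string
	for i := 0; i <= lcaS; i++ {
		path = append(path, sc[i].ID) // start side, down to the LCA
	}
	for i := lcaE - 1; i >= 0; i-- {
		path = append(path, ec[i].ID) // end side, skipping the LCA
	}
	return path
}

func main() {
	root := &node{ID: "root"}
	a := &node{ID: "a", Parent: root}
	b := &node{ID: "b", Parent: a}
	c := &node{ID: "c", Parent: root}
	fmt.Println(findPath(b, c))
}
```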

View File

@ -19,8 +19,8 @@ func NewRedisClient() *RedisClient {
}
}
// QueryByZRangeByLex define func to query real time data from redis zset
func (rc *RedisClient) QueryByZRangeByLex(ctx context.Context, key string, size int64) ([]redis.Z, error) {
// QueryByZRange define func to query real time data from redis zset
func (rc *RedisClient) QueryByZRange(ctx context.Context, key string, size int64) ([]redis.Z, error) {
client := rc.Client
args := redis.ZRangeArgs{
Key: key,

View File

@ -16,13 +16,15 @@ var (
)
// initClient define func of return successfully initialized redis client
func initClient(rCfg config.RedisConfig) *redis.Client {
func initClient(rCfg config.RedisConfig, deployEnv string) *redis.Client {
client, err := util.NewRedisClient(
rCfg.Addr,
util.WithPassword(rCfg.Password),
util.WithPassword(rCfg.Password, deployEnv),
util.WithDB(rCfg.DB),
util.WithPoolSize(rCfg.PoolSize),
util.WithTimeout(time.Duration(rCfg.Timeout)*time.Second),
util.WithConnectTimeout(time.Duration(rCfg.DialTimeout)*time.Second),
util.WithReadTimeout(time.Duration(rCfg.ReadTimeout)*time.Second),
util.WithWriteTimeout(time.Duration(rCfg.WriteTimeout)*time.Second),
)
if err != nil {
panic(err)
@ -31,9 +33,9 @@ func initClient(rCfg config.RedisConfig) *redis.Client {
}
// InitRedisClientInstance define func of return instance of redis client
func InitRedisClientInstance(rCfg config.RedisConfig) *redis.Client {
func InitRedisClientInstance(rCfg config.RedisConfig, deployEnv string) *redis.Client {
once.Do(func() {
_globalStorageClient = initClient(rCfg)
_globalStorageClient = initClient(rCfg, deployEnv)
})
return _globalStorageClient
}

View File

@ -1,3 +1,4 @@
// Package diagram provide diagram data structure and operation
package diagram
import (
@ -39,11 +40,9 @@ func UpdateGrapMap(pageID int64, graphInfo *Graph) bool {
// StoreGraphMap define func of store circuit diagram topologic data with pageID and topologic info
func StoreGraphMap(pageID int64, graphInfo *Graph) {
graphOverview.Store(pageID, graphInfo)
return
}
// DeleteGraphMap define func of delete circuit diagram topologic data with pageID
func DeleteGraphMap(pageID int64) {
graphOverview.Delete(pageID)
return
}

View File

@ -16,13 +16,15 @@ var (
)
// initClient define func of return successfully initialized redis client
func initClient(rCfg config.RedisConfig) *redis.Client {
func initClient(rCfg config.RedisConfig, deployEnv string) *redis.Client {
client, err := util.NewRedisClient(
rCfg.Addr,
util.WithPassword(rCfg.Password),
util.WithPassword(rCfg.Password, deployEnv),
util.WithDB(rCfg.DB),
util.WithPoolSize(rCfg.PoolSize),
util.WithTimeout(time.Duration(rCfg.Timeout)*time.Second),
util.WithConnectTimeout(time.Duration(rCfg.DialTimeout)*time.Second),
util.WithReadTimeout(time.Duration(rCfg.ReadTimeout)*time.Second),
util.WithWriteTimeout(time.Duration(rCfg.WriteTimeout)*time.Second),
)
if err != nil {
panic(err)
@ -31,9 +33,9 @@ func initClient(rCfg config.RedisConfig) *redis.Client {
}
// InitClientInstance define func of return instance of redis client
func InitClientInstance(rCfg config.RedisConfig) *redis.Client {
func InitClientInstance(rCfg config.RedisConfig, deployEnv string) *redis.Client {
once.Do(func() {
_globalLockerClient = initClient(rCfg)
_globalLockerClient = initClient(rCfg, deployEnv)
})
return _globalLockerClient
}

View File

@ -102,13 +102,12 @@ const docTemplate = `{
"summary": "Measurement point recommendation (search box autocomplete)",
"parameters": [
{
"description": "Query input, e.g. 'trans' or 'transformfeeder1_220.'",
"name": "request",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/network.MeasurementRecommendRequest"
}
"type": "string",
"example": "\"grid1\"",
"description": "Recommendation keyword, e.g. 'grid1' or 'grid1.'",
"name": "input",
"in": "query",
"required": true
}
],
"responses": {
@ -176,19 +175,400 @@ const docTemplate = `{
}
}
}
},
"/monitors/data/realtime/stream/:clientID": {
"get": {
"description": "Pull real-time data for the given clientID",
"tags": [
"RealTime Component Websocket"
],
"summary": "Real-time data pull websocket API",
"responses": {}
}
},
"/monitors/data/subscriptions": {
"post": {
"description": "Start or stop a subscription to measurement-node real-time data in the modelRT service, based on the supplied component tokens",
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"tags": [
"RealTime Component"
],
"summary": "Start or stop a real-time data subscription",
"parameters": [
{
"description": "Measurement-node real-time data subscription",
"name": "request",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/network.RealTimeSubRequest"
}
}
],
"responses": {
"2000": {
"description": "Subscription result list",
"schema": {
"allOf": [
{
"$ref": "#/definitions/network.SuccessResponse"
},
{
"type": "object",
"properties": {
"payload": {
"$ref": "#/definitions/network.RealTimeSubPayload"
}
}
}
]
}
},
"3000": {
"description": "Subscription result list",
"schema": {
"allOf": [
{
"$ref": "#/definitions/network.FailureResponse"
},
{
"type": "object",
"properties": {
"payload": {
"$ref": "#/definitions/network.RealTimeSubPayload"
}
}
}
]
}
}
}
}
},
"/task/async": {
"post": {
"description": "Create a new async task and return the task ID; the task is queued for processing",
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"tags": [
"AsyncTask"
],
"summary": "Create an async task",
"parameters": [
{
"description": "Task creation request",
"name": "request",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/network.AsyncTaskCreateRequest"
}
}
],
"responses": {
"200": {
"description": "Task created successfully",
"schema": {
"allOf": [
{
"$ref": "#/definitions/network.SuccessResponse"
},
{
"type": "object",
"properties": {
"payload": {
"$ref": "#/definitions/network.AsyncTaskCreateResponse"
}
}
}
]
}
},
"400": {
"description": "Invalid request parameters",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
},
"500": {
"description": "Internal server error",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
}
}
}
},
"/task/async/results": {
"get": {
"description": "Query async task status and results by a list of task IDs",
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"tags": [
"AsyncTask"
],
"summary": "Query async task results",
"parameters": [
{
"type": "string",
"description": "Comma-separated list of task IDs",
"name": "task_ids",
"in": "query",
"required": true
}
],
"responses": {
"200": {
"description": "Query succeeded",
"schema": {
"allOf": [
{
"$ref": "#/definitions/network.SuccessResponse"
},
{
"type": "object",
"properties": {
"payload": {
"$ref": "#/definitions/network.AsyncTaskResultQueryResponse"
}
}
}
]
}
},
"400": {
"description": "Invalid request parameters",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
},
"500": {
"description": "Internal server error",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
}
}
}
},
"/task/async/{task_id}": {
"get": {
"description": "Query the detailed status and result of an async task by task ID",
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"tags": [
"AsyncTask"
],
"summary": "Query async task details",
"parameters": [
{
"type": "string",
"description": "Task ID",
"name": "task_id",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "Query succeeded",
"schema": {
"allOf": [
{
"$ref": "#/definitions/network.SuccessResponse"
},
{
"type": "object",
"properties": {
"payload": {
"$ref": "#/definitions/network.AsyncTaskResult"
}
}
}
]
}
},
"400": {
"description": "Invalid request parameters",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
},
"404": {
"description": "Task not found",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
},
"500": {
"description": "Internal server error",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
}
}
}
},
"/task/async/{task_id}/cancel": {
"post": {
"description": "Cancel the async task with the specified ID, provided it has not yet started executing",
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"tags": [
"AsyncTask"
],
"summary": "Cancel an async task",
"parameters": [
{
"type": "string",
"description": "Task ID",
"name": "task_id",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "Task cancelled successfully",
"schema": {
"$ref": "#/definitions/network.SuccessResponse"
}
},
"400": {
"description": "Invalid request parameters, or the task cannot be cancelled",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
},
"404": {
"description": "Task not found",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
},
"500": {
"description": "Internal server error",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
}
}
}
}
},
"definitions": {
"network.AsyncTaskCreateRequest": {
"type": "object",
"properties": {
"params": {
"description": "required: true",
"type": "object"
},
"task_type": {
"description": "required: true\nenum: TOPOLOGY_ANALYSIS, PERFORMANCE_ANALYSIS, EVENT_ANALYSIS, BATCH_IMPORT",
"type": "string",
"example": "TOPOLOGY_ANALYSIS"
}
}
},
"network.AsyncTaskCreateResponse": {
"type": "object",
"properties": {
"task_id": {
"type": "string",
"example": "123e4567-e89b-12d3-a456-426614174000"
}
}
},
"network.AsyncTaskResult": {
"type": "object",
"properties": {
"created_at": {
"type": "integer",
"example": 1741846200
},
"error_code": {
"type": "integer",
"example": 400102
},
"error_detail": {
"type": "object"
},
"error_message": {
"type": "string",
"example": "Component UUID not found"
},
"finished_at": {
"type": "integer",
"example": 1741846205
},
"progress": {
"type": "integer",
"example": 65
},
"result": {
"type": "object"
},
"status": {
"type": "string",
"example": "COMPLETED"
},
"task_id": {
"type": "string",
"example": "123e4567-e89b-12d3-a456-426614174000"
},
"task_type": {
"type": "string",
"example": "TOPOLOGY_ANALYSIS"
}
}
},
"network.AsyncTaskResultQueryResponse": {
"type": "object",
"properties": {
"tasks": {
"type": "array",
"items": {
"$ref": "#/definitions/network.AsyncTaskResult"
}
},
"total": {
"type": "integer",
"example": 3
}
}
},
"network.FailureResponse": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"example": 500
"example": 3000
},
"msg": {
"type": "string",
"example": "failed to get recommend data from redis"
"example": "process completed with partial failures"
},
"payload": {
"type": "object"
@ -216,15 +596,10 @@ const docTemplate = `{
" \"I_B_rms\"",
"\"I_C_rms\"]"
]
}
}
},
"network.MeasurementRecommendRequest": {
"type": "object",
"properties": {
"input": {
"recommended_type": {
"type": "string",
"example": "trans"
"example": "grid_tag"
}
}
},
@ -237,21 +612,93 @@ const docTemplate = `{
}
}
},
"network.RealTimeMeasurementItem": {
"type": "object",
"properties": {
"interval": {
"type": "string",
"example": "1"
},
"targets": {
"type": "array",
"items": {
"type": "string"
},
"example": [
"[\"grid1.zone1.station1.ns1.tag1.bay.I11_A_rms\"",
"\"grid1.zone1.station1.ns1.tag1.tag1.bay.I11_B_rms\"]"
]
}
}
},
"network.RealTimeSubPayload": {
"type": "object",
"properties": {
"client_id": {
"type": "string",
"example": "5d72f2d9-e33a-4f1b-9c76-88a44b9a953e"
},
"targets": {
"type": "array",
"items": {
"$ref": "#/definitions/network.TargetResult"
}
}
}
},
"network.RealTimeSubRequest": {
"type": "object",
"properties": {
"action": {
"description": "required: true\nenum: [start, stop]",
"type": "string",
"example": "start"
},
"client_id": {
"type": "string",
"example": "5d72f2d9-e33a-4f1b-9c76-88a44b9a953e"
},
"measurements": {
"description": "required: true",
"type": "array",
"items": {
"$ref": "#/definitions/network.RealTimeMeasurementItem"
}
}
}
},
"network.SuccessResponse": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"example": 200
"example": 2000
},
"msg": {
"type": "string",
"example": "success"
"example": "process completed"
},
"payload": {
"type": "object"
}
}
},
"network.TargetResult": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"example": 20000
},
"id": {
"type": "string",
"example": "grid1.zone1.station1.ns1.tag1.transformfeeder1_220.I_A_rms"
},
"msg": {
"type": "string",
"example": "subscription success"
}
}
}
}
}`


@ -96,13 +96,12 @@
"summary": "测量点推荐(搜索框自动补全)",
"parameters": [
{
"description": "查询输入参数,例如 'trans' 或 'transformfeeder1_220.'",
"name": "request",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/network.MeasurementRecommendRequest"
}
"type": "string",
"example": "\"grid1\"",
"description": "推荐关键词,例如 'grid1' 或 'grid1.'",
"name": "input",
"in": "query",
"required": true
}
],
"responses": {
@ -170,19 +169,400 @@
}
}
}
},
"/monitors/data/realtime/stream/:clientID": {
"get": {
"description": "根据用户输入的clientID拉取对应的实时数据",
"tags": [
"RealTime Component Websocket"
],
"summary": "实时数据拉取 websocket api",
"responses": {}
}
},
"/monitors/data/subscriptions": {
"post": {
"description": "根据用户输入的组件token,从 modelRT 服务中开始或结束对于量测节点的实时数据的订阅",
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"tags": [
"RealTime Component"
],
"summary": "开始或结束订阅实时数据",
"parameters": [
{
"description": "量测节点实时数据订阅",
"name": "request",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/network.RealTimeSubRequest"
}
}
],
"responses": {
"2000": {
"description": "订阅实时数据结果列表",
"schema": {
"allOf": [
{
"$ref": "#/definitions/network.SuccessResponse"
},
{
"type": "object",
"properties": {
"payload": {
"$ref": "#/definitions/network.RealTimeSubPayload"
}
}
}
]
}
},
"3000": {
"description": "订阅实时数据结果列表",
"schema": {
"allOf": [
{
"$ref": "#/definitions/network.FailureResponse"
},
{
"type": "object",
"properties": {
"payload": {
"$ref": "#/definitions/network.RealTimeSubPayload"
}
}
}
]
}
}
}
}
},
"/task/async": {
"post": {
"description": "创建新的异步任务并返回任务ID任务将被提交到队列等待处理",
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"tags": [
"AsyncTask"
],
"summary": "创建异步任务",
"parameters": [
{
"description": "任务创建请求",
"name": "request",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/network.AsyncTaskCreateRequest"
}
}
],
"responses": {
"200": {
"description": "任务创建成功",
"schema": {
"allOf": [
{
"$ref": "#/definitions/network.SuccessResponse"
},
{
"type": "object",
"properties": {
"payload": {
"$ref": "#/definitions/network.AsyncTaskCreateResponse"
}
}
}
]
}
},
"400": {
"description": "请求参数错误",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
},
"500": {
"description": "服务器内部错误",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
}
}
}
},
"/task/async/results": {
"get": {
"description": "根据任务ID列表查询异步任务的状态和结果",
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"tags": [
"AsyncTask"
],
"summary": "查询异步任务结果",
"parameters": [
{
"type": "string",
"description": "任务ID列表用逗号分隔",
"name": "task_ids",
"in": "query",
"required": true
}
],
"responses": {
"200": {
"description": "查询成功",
"schema": {
"allOf": [
{
"$ref": "#/definitions/network.SuccessResponse"
},
{
"type": "object",
"properties": {
"payload": {
"$ref": "#/definitions/network.AsyncTaskResultQueryResponse"
}
}
}
]
}
},
"400": {
"description": "请求参数错误",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
},
"500": {
"description": "服务器内部错误",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
}
}
}
},
"/task/async/{task_id}": {
"get": {
"description": "根据任务ID查询异步任务的详细状态和结果",
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"tags": [
"AsyncTask"
],
"summary": "查询异步任务详情",
"parameters": [
{
"type": "string",
"description": "任务ID",
"name": "task_id",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "查询成功",
"schema": {
"allOf": [
{
"$ref": "#/definitions/network.SuccessResponse"
},
{
"type": "object",
"properties": {
"payload": {
"$ref": "#/definitions/network.AsyncTaskResult"
}
}
}
]
}
},
"400": {
"description": "请求参数错误",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
},
"404": {
"description": "任务不存在",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
},
"500": {
"description": "服务器内部错误",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
}
}
}
},
"/task/async/{task_id}/cancel": {
"post": {
"description": "取消指定ID的异步任务如果任务尚未开始执行",
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"tags": [
"AsyncTask"
],
"summary": "取消异步任务",
"parameters": [
{
"type": "string",
"description": "任务ID",
"name": "task_id",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "任务取消成功",
"schema": {
"$ref": "#/definitions/network.SuccessResponse"
}
},
"400": {
"description": "请求参数错误或任务无法取消",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
},
"404": {
"description": "任务不存在",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
},
"500": {
"description": "服务器内部错误",
"schema": {
"$ref": "#/definitions/network.FailureResponse"
}
}
}
}
}
},
"definitions": {
"network.AsyncTaskCreateRequest": {
"type": "object",
"properties": {
"params": {
"description": "required: true",
"type": "object"
},
"task_type": {
"description": "required: true\nenum: TOPOLOGY_ANALYSIS, PERFORMANCE_ANALYSIS, EVENT_ANALYSIS, BATCH_IMPORT",
"type": "string",
"example": "TOPOLOGY_ANALYSIS"
}
}
},
"network.AsyncTaskCreateResponse": {
"type": "object",
"properties": {
"task_id": {
"type": "string",
"example": "123e4567-e89b-12d3-a456-426614174000"
}
}
},
"network.AsyncTaskResult": {
"type": "object",
"properties": {
"created_at": {
"type": "integer",
"example": 1741846200
},
"error_code": {
"type": "integer",
"example": 400102
},
"error_detail": {
"type": "object"
},
"error_message": {
"type": "string",
"example": "Component UUID not found"
},
"finished_at": {
"type": "integer",
"example": 1741846205
},
"progress": {
"type": "integer",
"example": 65
},
"result": {
"type": "object"
},
"status": {
"type": "string",
"example": "COMPLETED"
},
"task_id": {
"type": "string",
"example": "123e4567-e89b-12d3-a456-426614174000"
},
"task_type": {
"type": "string",
"example": "TOPOLOGY_ANALYSIS"
}
}
},
"network.AsyncTaskResultQueryResponse": {
"type": "object",
"properties": {
"tasks": {
"type": "array",
"items": {
"$ref": "#/definitions/network.AsyncTaskResult"
}
},
"total": {
"type": "integer",
"example": 3
}
}
},
"network.FailureResponse": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"example": 500
"example": 3000
},
"msg": {
"type": "string",
"example": "failed to get recommend data from redis"
"example": "process completed with partial failures"
},
"payload": {
"type": "object"
@ -210,15 +590,10 @@
" \"I_B_rms\"",
"\"I_C_rms\"]"
]
}
}
},
"network.MeasurementRecommendRequest": {
"type": "object",
"properties": {
"input": {
"recommended_type": {
"type": "string",
"example": "trans"
"example": "grid_tag"
}
}
},
@ -231,21 +606,93 @@
}
}
},
"network.RealTimeMeasurementItem": {
"type": "object",
"properties": {
"interval": {
"type": "string",
"example": "1"
},
"targets": {
"type": "array",
"items": {
"type": "string"
},
"example": [
"[\"grid1.zone1.station1.ns1.tag1.bay.I11_A_rms\"",
"\"grid1.zone1.station1.ns1.tag1.tag1.bay.I11_B_rms\"]"
]
}
}
},
"network.RealTimeSubPayload": {
"type": "object",
"properties": {
"client_id": {
"type": "string",
"example": "5d72f2d9-e33a-4f1b-9c76-88a44b9a953e"
},
"targets": {
"type": "array",
"items": {
"$ref": "#/definitions/network.TargetResult"
}
}
}
},
"network.RealTimeSubRequest": {
"type": "object",
"properties": {
"action": {
"description": "required: true\nenum: [start, stop]",
"type": "string",
"example": "start"
},
"client_id": {
"type": "string",
"example": "5d72f2d9-e33a-4f1b-9c76-88a44b9a953e"
},
"measurements": {
"description": "required: true",
"type": "array",
"items": {
"$ref": "#/definitions/network.RealTimeMeasurementItem"
}
}
}
},
"network.SuccessResponse": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"example": 200
"example": 2000
},
"msg": {
"type": "string",
"example": "success"
"example": "process completed"
},
"payload": {
"type": "object"
}
}
},
"network.TargetResult": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"example": 20000
},
"id": {
"type": "string",
"example": "grid1.zone1.station1.ns1.tag1.transformfeeder1_220.I_A_rms"
},
"msg": {
"type": "string",
"example": "subscription success"
}
}
}
}
}


@ -1,12 +1,71 @@
basePath: /api/v1
definitions:
network.AsyncTaskCreateRequest:
properties:
params:
description: 'required: true'
type: object
task_type:
description: |-
required: true
enum: TOPOLOGY_ANALYSIS, PERFORMANCE_ANALYSIS, EVENT_ANALYSIS, BATCH_IMPORT
example: TOPOLOGY_ANALYSIS
type: string
type: object
network.AsyncTaskCreateResponse:
properties:
task_id:
example: 123e4567-e89b-12d3-a456-426614174000
type: string
type: object
network.AsyncTaskResult:
properties:
created_at:
example: 1741846200
type: integer
error_code:
example: 400102
type: integer
error_detail:
type: object
error_message:
example: Component UUID not found
type: string
finished_at:
example: 1741846205
type: integer
progress:
example: 65
type: integer
result:
type: object
status:
example: COMPLETED
type: string
task_id:
example: 123e4567-e89b-12d3-a456-426614174000
type: string
task_type:
example: TOPOLOGY_ANALYSIS
type: string
type: object
network.AsyncTaskResultQueryResponse:
properties:
tasks:
items:
$ref: '#/definitions/network.AsyncTaskResult'
type: array
total:
example: 3
type: integer
type: object
network.FailureResponse:
properties:
code:
example: 3000
type: integer
msg:
example: process completed with partial failures
type: string
payload:
type: object
@ -27,11 +86,8 @@ definitions:
items:
type: string
type: array
type: object
network.MeasurementRecommendRequest:
properties:
recommended_type:
example: grid_tag
type: string
type: object
network.RealTimeDataPayload:
@ -40,17 +96,69 @@ definitions:
description: TODO add example tag
type: object
type: object
network.RealTimeMeasurementItem:
properties:
interval:
example: "1"
type: string
targets:
example:
- '["grid1.zone1.station1.ns1.tag1.bay.I11_A_rms"'
- '"grid1.zone1.station1.ns1.tag1.tag1.bay.I11_B_rms"]'
items:
type: string
type: array
type: object
network.RealTimeSubPayload:
properties:
client_id:
example: 5d72f2d9-e33a-4f1b-9c76-88a44b9a953e
type: string
targets:
items:
$ref: '#/definitions/network.TargetResult'
type: array
type: object
network.RealTimeSubRequest:
properties:
action:
description: |-
required: true
enum: [start, stop]
example: start
type: string
client_id:
example: 5d72f2d9-e33a-4f1b-9c76-88a44b9a953e
type: string
measurements:
description: 'required: true'
items:
$ref: '#/definitions/network.RealTimeMeasurementItem'
type: array
type: object
network.SuccessResponse:
properties:
code:
example: 2000
type: integer
msg:
example: process completed
type: string
payload:
type: object
type: object
network.TargetResult:
properties:
code:
example: 20000
type: integer
id:
example: grid1.zone1.station1.ns1.tag1.transformfeeder1_220.I_A_rms
type: string
msg:
example: subscription success
type: string
type: object
host: localhost:8080
info:
contact:
@ -110,12 +218,12 @@ paths:
- application/json
description: Looks up possible measurement points or structural paths in Redis based on the user-supplied string and returns a recommendation list.
parameters:
- description: Recommendation keyword, e.g. 'grid1' or 'grid1.'
example: '"grid1"'
in: query
name: input
required: true
type: string
produces:
- application/json
responses:
@ -160,4 +268,187 @@ paths:
summary: load circuit diagram info
tags:
- load circuit_diagram
/monitors/data/realtime/stream/:clientID:
get:
description: Pulls the corresponding real-time data for the clientID supplied by the user
responses: {}
summary: Real-time data pull WebSocket API
tags:
- RealTime Component Websocket
/monitors/data/subscriptions:
post:
consumes:
- application/json
description: Starts or stops a subscription to real-time measurement-node data from the modelRT service, based on the component token supplied by the user
parameters:
- description: Real-time measurement-node data subscription
in: body
name: request
required: true
schema:
$ref: '#/definitions/network.RealTimeSubRequest'
produces:
- application/json
responses:
"2000":
description: List of real-time data subscription results
schema:
allOf:
- $ref: '#/definitions/network.SuccessResponse'
- properties:
payload:
$ref: '#/definitions/network.RealTimeSubPayload'
type: object
"3000":
description: List of real-time data subscription results
schema:
allOf:
- $ref: '#/definitions/network.FailureResponse'
- properties:
payload:
$ref: '#/definitions/network.RealTimeSubPayload'
type: object
summary: Start or stop a real-time data subscription
tags:
- RealTime Component
/task/async:
post:
consumes:
- application/json
description: Creates a new asynchronous task and returns its task ID; the task is queued for processing
parameters:
- description: Task creation request
in: body
name: request
required: true
schema:
$ref: '#/definitions/network.AsyncTaskCreateRequest'
produces:
- application/json
responses:
"200":
description: Task created successfully
schema:
allOf:
- $ref: '#/definitions/network.SuccessResponse'
- properties:
payload:
$ref: '#/definitions/network.AsyncTaskCreateResponse'
type: object
"400":
description: Invalid request parameters
schema:
$ref: '#/definitions/network.FailureResponse'
"500":
description: Internal server error
schema:
$ref: '#/definitions/network.FailureResponse'
summary: Create an asynchronous task
tags:
- AsyncTask
/task/async/{task_id}:
get:
consumes:
- application/json
description: Queries the detailed status and result of an asynchronous task by task ID
parameters:
- description: Task ID
in: path
name: task_id
required: true
type: string
produces:
- application/json
responses:
"200":
description: Query succeeded
schema:
allOf:
- $ref: '#/definitions/network.SuccessResponse'
- properties:
payload:
$ref: '#/definitions/network.AsyncTaskResult'
type: object
"400":
description: Invalid request parameters
schema:
$ref: '#/definitions/network.FailureResponse'
"404":
description: Task not found
schema:
$ref: '#/definitions/network.FailureResponse'
"500":
description: Internal server error
schema:
$ref: '#/definitions/network.FailureResponse'
summary: Query asynchronous task details
tags:
- AsyncTask
/task/async/{task_id}/cancel:
post:
consumes:
- application/json
description: Cancels the asynchronous task with the given ID, provided it has not yet started executing
parameters:
- description: Task ID
in: path
name: task_id
required: true
type: string
produces:
- application/json
responses:
"200":
description: Task cancelled successfully
schema:
$ref: '#/definitions/network.SuccessResponse'
"400":
description: Invalid request parameters or the task cannot be cancelled
schema:
$ref: '#/definitions/network.FailureResponse'
"404":
description: Task not found
schema:
$ref: '#/definitions/network.FailureResponse'
"500":
description: Internal server error
schema:
$ref: '#/definitions/network.FailureResponse'
summary: Cancel an asynchronous task
tags:
- AsyncTask
/task/async/results:
get:
consumes:
- application/json
description: Queries the status and results of asynchronous tasks by a list of task IDs
parameters:
- description: Comma-separated list of task IDs
in: query
name: task_ids
required: true
type: string
produces:
- application/json
responses:
"200":
description: Query succeeded
schema:
allOf:
- $ref: '#/definitions/network.SuccessResponse'
- properties:
payload:
$ref: '#/definitions/network.AsyncTaskResultQueryResponse'
type: object
"400":
description: Invalid request parameters
schema:
$ref: '#/definitions/network.FailureResponse'
"500":
description: Internal server error
schema:
$ref: '#/definitions/network.FailureResponse'
summary: Query asynchronous task results
tags:
- AsyncTask
swagger: "2.0"

go.mod

@ -1,26 +1,33 @@
module modelRT
go 1.25.0
require (
github.com/DATA-DOG/go-sqlmock v1.5.2
github.com/RediSearch/redisearch-go/v2 v2.1.1
github.com/bitly/go-simplejson v0.5.1
github.com/gin-contrib/cors v1.7.6
github.com/gin-gonic/gin v1.10.1
github.com/gofrs/uuid v4.4.0+incompatible
github.com/gomodule/redigo v1.8.9
github.com/gorilla/websocket v1.5.3
github.com/json-iterator/go v1.1.12
github.com/natefinch/lumberjack v2.0.0+incompatible
github.com/panjf2000/ants/v2 v2.10.0
github.com/rabbitmq/amqp091-go v1.10.0
github.com/redis/go-redis/v9 v9.7.3
github.com/spf13/viper v1.19.0
github.com/stretchr/testify v1.11.1
github.com/swaggo/files v1.0.1
github.com/swaggo/gin-swagger v1.6.0
github.com/swaggo/swag v1.16.4
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78
go.opentelemetry.io/contrib/propagators/b3 v1.43.0
go.opentelemetry.io/otel v1.43.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0
go.opentelemetry.io/otel/sdk v1.43.0
go.opentelemetry.io/otel/trace v1.43.0
go.uber.org/zap v1.27.0
gorm.io/driver/mysql v1.5.7
gorm.io/driver/postgres v1.5.9
gorm.io/gorm v1.25.12
@ -29,25 +36,29 @@ require (
require (
github.com/BurntSushi/toml v1.4.0 // indirect
github.com/KyleBanks/depth v1.2.1 // indirect
github.com/bytedance/sonic v1.13.3 // indirect
github.com/bytedance/sonic/loader v0.2.4 // indirect
github.com/cenkalti/backoff/v5 v5.0.3 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cloudwego/base64x v0.1.5 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.9 // indirect
github.com/gin-contrib/sse v1.1.0 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.21.0 // indirect
github.com/go-openapi/spec v0.21.0 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.26.0 // indirect
github.com/go-sql-driver/mysql v1.7.0 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a // indirect
@ -56,7 +67,7 @@ require (
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.10 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/magiconair/properties v1.8.7 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
@ -64,7 +75,7 @@ require (
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/sagikazarmark/locafero v0.4.0 // indirect
github.com/sagikazarmark/slog-shim v0.1.0 // indirect
@ -74,16 +85,24 @@ require (
github.com/spf13/pflag v1.0.5 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.3.0 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 // indirect
go.opentelemetry.io/otel/metric v1.43.0 // indirect
go.opentelemetry.io/proto/otlp v1.10.0 // indirect
go.uber.org/multierr v1.10.0 // indirect
golang.org/x/arch v0.18.0 // indirect
golang.org/x/crypto v0.49.0 // indirect
golang.org/x/exp v0.0.0-20230905200255-921286631fa9 // indirect
golang.org/x/net v0.52.0 // indirect
golang.org/x/sync v0.20.0 // indirect
golang.org/x/sys v0.42.0 // indirect
golang.org/x/text v0.35.0 // indirect
golang.org/x/tools v0.42.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 // indirect
google.golang.org/grpc v1.80.0 // indirect
google.golang.org/protobuf v1.36.11 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect

go.sum

@ -12,16 +12,17 @@ github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
github.com/bytedance/sonic v1.13.3 h1:MS8gmaH16Gtirygw7jV91pDCN33NyMrPbN7qiYhEsF0=
github.com/bytedance/sonic v1.13.3/go.mod h1:o68xyaF9u2gvVBuGHPlUVCy+ZfmNNO5ETf1+KgkJhz4=
github.com/bytedance/sonic/loader v0.1.1/go.mod h1:ncP89zfokxS5LZrJxl5z0UJcsk4M4yY2JpfqGeCtNLU=
github.com/bytedance/sonic/loader v0.2.4 h1:ZWCw4stuXUsn1/+zQDqeE7JKP+QO47tz7QCNan80NzY=
github.com/bytedance/sonic/loader v0.2.4/go.mod h1:N8A3vUdtUebEY2/VQC0MyhYeKUFosQU6FxH2JmUe6VI=
github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cloudwego/base64x v0.1.5 h1:XPciSp1xaq2VCSt6lF0phncD4koWyULpl5bUxbfCyP4=
github.com/cloudwego/base64x v0.1.5/go.mod h1:0zlkT4Wn5C6NdauXdJRhSKRlJvmclQ1hhJgA0rcu/8w=
github.com/cloudwego/iasm v0.2.0/go.mod h1:8rXZaNYT2n95jn+zTI1sDr+IgcD2GVs0nlbbQPiEFhY=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@ -33,14 +34,21 @@ github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHk
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/gabriel-vasile/mimetype v1.4.9 h1:5k+WDwEsD9eTLL8Tz3L0VnmVh9QxGjRmjBvAG7U/oYY=
github.com/gabriel-vasile/mimetype v1.4.9/go.mod h1:WnSQhFKJuBlRyLiKohA/2DtIlPFAbguNaG7QCHcyGok=
github.com/gin-contrib/cors v1.7.6 h1:3gQ8GMzs1Ylpf70y8bMw4fVpycXIeX1ZemuSQIsnQQY=
github.com/gin-contrib/cors v1.7.6/go.mod h1:Ulcl+xN4jel9t1Ry8vqph23a60FwH9xVLd+3ykmTjOk=
github.com/gin-contrib/gzip v0.0.6 h1:NjcunTcGAj5CO1gn4N8jHOSIeRFHIbn51z6K+xaN4d4=
github.com/gin-contrib/gzip v0.0.6/go.mod h1:QOJlmV2xmayAjkNS2Y8NQsMneuRShOU/kjovCXNuzzk=
github.com/gin-contrib/sse v1.1.0 h1:n0w2GMuUpWDVp7qSpvze6fAu9iRxJY4Hmj6AmBOU05w=
github.com/gin-contrib/sse v1.1.0/go.mod h1:hxRZ5gVpWMT7Z0B0gSNYqqsSCNIJMjzvm6fqCz9vjwM=
github.com/gin-gonic/gin v1.10.1 h1:T0ujvqyCSqRopADpgPgiTT63DUQVSfojyME59Ei63pQ=
github.com/gin-gonic/gin v1.10.1/go.mod h1:4PMNQiOhvDRa013RKVbsiNwoyezlm2rm0uX/T7kzp5Y=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
github.com/go-openapi/jsonreference v0.21.0 h1:Rs+Y7hSXT83Jacb7kFyjn4ijOuVGSvOdF2+tg1TRrwQ=
@ -55,21 +63,27 @@ github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/o
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-playground/validator/v10 v10.26.0 h1:SP05Nqhjcvz81uJaRfEV0YBSSSGMc/iMaVtFbr3Sw2k=
github.com/go-playground/validator/v10 v10.26.0/go.mod h1:I5QpIEbmr8On7W0TktmJAumgzX4CA1XNl4ZmDuVHKKo=
github.com/go-sql-driver/mysql v1.7.0 h1:ueSltNNllEqE3qcWBTD0iQd3IpL/6U+mJxLkazJ7YPc=
github.com/go-sql-driver/mysql v1.7.0/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=
github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/gofrs/uuid v4.4.0+incompatible h1:3qXRTX8/NbyulANqlc0lchS1gqAVxRgsuW1YrTJupqA=
github.com/gofrs/uuid v4.4.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/gomodule/redigo v1.8.9 h1:Sl3u+2BI/kk+VEatbj0scLdrFhjPmbxOc1myhDP41ws=
github.com/gomodule/redigo v1.8.9/go.mod h1:7ArFNvsTjH8GMMzB4uy1snslv2BwmginuMs06a1uzZE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 h1:HWRh5R2+9EifMyIHV7ZV+MIZqgz+PMpZ14Jynv3O2Zs=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0/go.mod h1:JfhWUomR1baixubs02l85lZYYOm7LV6om4ceouMv45c=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
@ -90,8 +104,8 @@ github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnr
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/kisielk/sqlstruct v0.0.0-20201105191214-5f3e10d3ab46/go.mod h1:yyMNCyc/Ib3bDTKd379tNMpB/7/H5TjM2Y9QJ5THLbE=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.10 h1:tBs3QSyvjDyFTq3uoc/9xFpCuOsJQFNPiAhYdw2skhE=
github.com/klauspost/cpuid/v2 v2.2.10/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/knz/go-libedit v1.10.1/go.mod h1:MZTVkCWyz0oBc7JOWP3wNAzd002ZbM/5hgShxwh4x8M=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
@ -116,15 +130,17 @@ github.com/natefinch/lumberjack v2.0.0+incompatible h1:4QJd3OLAMgj7ph+yZTuX13Ld4
github.com/natefinch/lumberjack v2.0.0+incompatible/go.mod h1:Wi9p2TTF5DG5oU+6YfsmYQpsTIOm0B1VNzQg9Mw6nPk=
github.com/panjf2000/ants/v2 v2.10.0 h1:zhRg1pQUtkyRiOFo2Sbqwjp0GfBNo9cUY2/Grpx1p+8=
github.com/panjf2000/ants/v2 v2.10.0/go.mod h1:7ZxyxsqE4vvW0M7LSD8aI3cKwgFhBHbxnlN8mDqHa1I=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rabbitmq/amqp091-go v1.10.0 h1:STpn5XsHlHGcecLmMFCtg7mqq0RnD+zFr4uzukfVhBw=
github.com/rabbitmq/amqp091-go v1.10.0/go.mod h1:Hy4jKW5kQART1u+JkDTF9YYOQUHXqMuhrgxOEeS7G4o=
github.com/redis/go-redis/v9 v9.7.3 h1:YpPyAayJV+XErNsatSElgRZZVCwXX9QzkKYNvO7x0wM=
github.com/redis/go-redis/v9 v9.7.3/go.mod h1:bGUrSggJ9X9GUmZpZNEOQKaANxSGgOEBRltRTZHSvrA=
github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M=
github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/sagikazarmark/locafero v0.4.0 h1:HApY1R9zGo4DBgr7dqsTH/JJxLTTsOt7u6keLGt6kNQ=
github.com/sagikazarmark/locafero v0.4.0/go.mod h1:Pe1W6UlPYUk/+wc/6KFhbORCfqzgYEpgQ3O5fPuL3H4=
github.com/sagikazarmark/slog-shim v0.1.0 h1:diDBnUNK9N/354PgrxMywXnAwEr1QZcOr6gto+ugjYE=
@@ -148,8 +164,8 @@ github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/swaggo/files v1.0.1 h1:J1bVJ4XHZNq0I46UU90611i9/YzdrF7x92oX1ig5IdE=
@@ -160,37 +176,59 @@ github.com/swaggo/swag v1.16.4 h1:clWJtd9LStiG3VeijiCfOVODP6VpHtKdQy9ELFG3s1A=
github.com/swaggo/swag v1.16.4/go.mod h1:VBsHJRsDvfYvqoiMKnsdwhNV9LEMHgEDZcyVYX0sxPg=
github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
github.com/ugorji/go/codec v1.2.12 h1:9LC83zGrHhuUA9l16C9AHXAqEV/2wBQ4nkvumAE65EE=
github.com/ugorji/go/codec v1.2.12/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
github.com/ugorji/go/codec v1.3.0 h1:Qd2W2sQawAfG8XSvzwhBeoGq71zXOC/Q1E9y/wUcsUA=
github.com/ugorji/go/codec v1.3.0/go.mod h1:pRBVtBSKl77K30Bv8R2P+cLSGaTtex6fsA2Wjqmfxj4=
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 h1:ilQV1hzziu+LLM3zUTJ0trRztfwgjqKnBWNtSRkbmwM=
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78/go.mod h1:aL8wCCfTfSfmXjznFBSZNN13rSJjlIOI1fUNAtF7rmI=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/contrib/propagators/b3 v1.43.0 h1:CETqV3QLLPTy5yNrqyMr41VnAOOD4lsRved7n4QG00A=
go.opentelemetry.io/contrib/propagators/b3 v1.43.0/go.mod h1:Q4mCiCdziYzpNR0g+6UqVotAlCDZdzz6L8jwY4knOrw=
go.opentelemetry.io/otel v1.43.0 h1:mYIM03dnh5zfN7HautFE4ieIig9amkNANT+xcVxAj9I=
go.opentelemetry.io/otel v1.43.0/go.mod h1:JuG+u74mvjvcm8vj8pI5XiHy1zDeoCS2LB1spIq7Ay0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0 h1:88Y4s2C8oTui1LGM6bTWkw0ICGcOLCAI5l6zsD1j20k=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.43.0/go.mod h1:Vl1/iaggsuRlrHf/hfPJPvVag77kKyvrLeD10kpMl+A=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0 h1:3iZJKlCZufyRzPzlQhUIWVmfltrXuGyfjREgGP3UUjc=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.43.0/go.mod h1:/G+nUPfhq2e+qiXMGxMwumDrP5jtzU+mWN7/sjT2rak=
go.opentelemetry.io/otel/metric v1.43.0 h1:d7638QeInOnuwOONPp4JAOGfbCEpYb+K6DVWvdxGzgM=
go.opentelemetry.io/otel/metric v1.43.0/go.mod h1:RDnPtIxvqlgO8GRW18W6Z/4P462ldprJtfxHxyKd2PY=
go.opentelemetry.io/otel/sdk v1.43.0 h1:pi5mE86i5rTeLXqoF/hhiBtUNcrAGHLKQdhg4h4V9Dg=
go.opentelemetry.io/otel/sdk v1.43.0/go.mod h1:P+IkVU3iWukmiit/Yf9AWvpyRDlUeBaRg6Y+C58QHzg=
go.opentelemetry.io/otel/sdk/metric v1.43.0 h1:S88dyqXjJkuBNLeMcVPRFXpRw2fuwdvfCGLEo89fDkw=
go.opentelemetry.io/otel/sdk/metric v1.43.0/go.mod h1:C/RJtwSEJ5hzTiUz5pXF1kILHStzb9zFlIEe85bhj6A=
go.opentelemetry.io/otel/trace v1.43.0 h1:BkNrHpup+4k4w+ZZ86CZoHHEkohws8AY+WTX09nk+3A=
go.opentelemetry.io/otel/trace v1.43.0/go.mod h1:/QJhyVBUUswCphDVxq+8mld+AvhXZLhe+8WVFxiFff0=
go.opentelemetry.io/proto/otlp v1.10.0 h1:IQRWgT5srOCYfiWnpqUYz9CVmbO8bFmKcwYxpuCSL2g=
go.opentelemetry.io/proto/otlp v1.10.0/go.mod h1:/CV4QoCR/S9yaPj8utp3lvQPoqMtxXdzn7ozvvozVqk=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.10.0 h1:S0h4aNzvfcFsC3dRF1jLoaov7oRaKqRGC/pUEJ2yvPQ=
go.uber.org/multierr v1.10.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/arch v0.12.0 h1:UsYJhbzPYGsT0HbEdmYcqtCv8UNGvnaL561NnIUvaKg=
golang.org/x/arch v0.12.0/go.mod h1:FEVrYAQjsQXMVJ1nsMoVVXPZg6p2JE2mx8psSWTDQys=
golang.org/x/arch v0.18.0 h1:WN9poc33zL4AzGxqf8VtpKUnGvMi8O9lhNyBMF/85qc=
golang.org/x/arch v0.18.0/go.mod h1:bdwinDaKcfZUGpH09BB7ZmOfhalA8lQdzl62l8gGWsk=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.30.0 h1:RwoQn3GkWiMkzlX562cLB7OxWvjH1L8xutO2WoJcRoY=
golang.org/x/crypto v0.30.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4=
golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA=
golang.org/x/exp v0.0.0-20230905200255-921286631fa9 h1:GoHiUyI/Tp2nVkLI2mCxVkOjsbSXD66ic0XW0js0R9g=
golang.org/x/exp v0.0.0-20230905200255-921286631fa9/go.mod h1:S2oDrQGGwySpoQPVqRShND87VCbxmc6bL1Yd2oYrm6k=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.22.0 h1:D4nJWe9zXqHOmWqj4VMOJhvzj7bEZg4wEYa759z1pH4=
golang.org/x/mod v0.22.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
golang.org/x/mod v0.33.0 h1:tHFzIWbBifEmbwtGz65eaWyGiGZatSrT9prnU8DbVL8=
golang.org/x/mod v0.33.0/go.mod h1:swjeQEj+6r7fODbD2cqrnje9PnziFuw4bmLbBZFrQ5w=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.32.0 h1:ZqPmj8Kzc+Y6e0+skZsuACbx+wzMgo5MQsJh9Qd6aYI=
golang.org/x/net v0.32.0/go.mod h1:CwU0IoeOlnQQWJ6ioyFrfRuomB8GKF6KbYXZVyeXNfs=
golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0=
golang.org/x/net v0.52.0/go.mod h1:R1MAz7uMZxVMualyPXb+VaqGSa3LIaUqk0eEt3w36Sw=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.20.0 h1:e0PTpb7pjO8GAtTs2dQ6jYa5BWYlMuX047Dco/pItO4=
golang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -198,8 +236,8 @@ golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo=
golang.org/x/sys v0.42.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
@@ -207,16 +245,24 @@ golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.35.0 h1:JOVx6vVDFokkpaq1AEptVzLTpDe9KGpj5tR4/X+ybL8=
golang.org/x/text v0.35.0/go.mod h1:khi/HExzZJ2pGnjenulevKNX1W67CUy0AsXcNubPGCA=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.28.0 h1:WuB6qZ4RPCQo5aP3WdKZS7i595EdWqWR8vqJTlwTVK8=
golang.org/x/tools v0.28.0/go.mod h1:dcIOrVd3mfQKTgrDVQHqCPMWy6lnhfhtX3hLXYVLfRw=
golang.org/x/tools v0.42.0 h1:uNgphsn75Tdz5Ji2q36v/nsFSfR/9BRFvqhGBaJGd5k=
golang.org/x/tools v0.42.0/go.mod h1:Ma6lCIwGZvHK6XtgbswSoWroEkhugApmsXyrUmBhfr0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.35.2 h1:8Ar7bF+apOIoThw1EdZl0p1oWvMqTHmpA2fRTyZO8io=
google.golang.org/protobuf v1.35.2/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9 h1:VPWxll4HlMw1Vs/qXtN7BvhZqsS9cdAittCNvVENElA=
google.golang.org/genproto/googleapis/api v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:7QBABkRtR8z+TEnmXTqIqwJLlzrZKVfAUm7tY3yGv0M=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 h1:m8qni9SQFH0tJc1X0vmnpw/0t+AImlSvp30sEupozUg=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
google.golang.org/grpc v1.80.0 h1:Xr6m2WmWZLETvUNvIUmeD5OAagMw3FiKmMlTdViWsHM=
google.golang.org/grpc v1.80.0/go.mod h1:ho/dLnxwi3EDJA4Zghp7k2Ec1+c2jqup0bFkw07bwF4=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=

View File

@@ -5,10 +5,10 @@ import (
"net/http"
"strconv"
"modelRT/alert"
"modelRT/constants"
"modelRT/logger"
"modelRT/network"
"modelRT/real-time-data/alert"
"github.com/gin-gonic/gin"
)

View File

@@ -0,0 +1,119 @@
// Package handler provides HTTP handlers for various endpoints.
package handler
import (
"net/http"
"time"
"modelRT/database"
"modelRT/logger"
"modelRT/network"
"modelRT/orm"
"github.com/gin-gonic/gin"
"github.com/gofrs/uuid"
"gorm.io/gorm"
)
// AsyncTaskCancelHandler handles cancellation of an async task
// @Summary Cancel an async task
// @Description Cancel the async task with the given ID, provided it has not started running yet
// @Tags AsyncTask
// @Accept json
// @Produce json
// @Param task_id path string true "task ID"
// @Success 200 {object} network.SuccessResponse "task cancelled successfully"
// @Failure 400 {object} network.FailureResponse "invalid request parameters or task cannot be cancelled"
// @Failure 404 {object} network.FailureResponse "task not found"
// @Failure 500 {object} network.FailureResponse "internal server error"
// @Router /task/async/{task_id}/cancel [post]
func AsyncTaskCancelHandler(c *gin.Context) {
ctx := c.Request.Context()
// Parse task ID from path parameter
taskIDStr := c.Param("task_id")
if taskIDStr == "" {
logger.Error(ctx, "task_id parameter is required")
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: "task_id parameter is required",
})
return
}
taskID, err := uuid.FromString(taskIDStr)
if err != nil {
logger.Error(ctx, "invalid task ID format", "task_id", taskIDStr, "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: "invalid task ID format",
})
return
}
pgClient := database.GetPostgresDBClient()
if pgClient == nil {
logger.Error(ctx, "database connection not found in context")
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusInternalServerError,
Msg: "database connection error",
})
return
}
// Query task from database
asyncTask, err := database.GetAsyncTaskByID(ctx, pgClient, taskID)
if err != nil {
if err == gorm.ErrRecordNotFound {
logger.Error(ctx, "async task not found", "task_id", taskID)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusNotFound,
Msg: "task not found",
})
return
}
logger.Error(ctx, "failed to query async task from database", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusInternalServerError,
Msg: "failed to query task",
})
return
}
// Check if task can be cancelled (only SUBMITTED tasks can be cancelled)
if asyncTask.Status != orm.AsyncTaskStatusSubmitted {
logger.Error(ctx, "task cannot be cancelled", "task_id", taskID, "status", asyncTask.Status)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: "task cannot be cancelled (already running or completed)",
})
return
}
// Update task status to failed with cancellation reason
timestamp := time.Now().Unix()
err = database.FailAsyncTask(ctx, pgClient, taskID, timestamp)
if err != nil {
logger.Error(ctx, "failed to cancel async task", "task_id", taskID, "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusInternalServerError,
Msg: "failed to cancel task",
})
return
}
// Update task result with cancellation error
err = database.UpdateAsyncTaskResultWithError(ctx, pgClient, taskID, 40003, "task cancelled by user", orm.JSONMap{
"cancelled_at": timestamp,
"cancelled_by": "user",
})
if err != nil {
logger.Error(ctx, "failed to update task result with cancellation error", "task_id", taskID, "error", err)
// Continue anyway since task is already marked as failed
}
c.JSON(http.StatusOK, network.SuccessResponse{
Code: 2000,
Msg: "task cancelled successfully",
})
}
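The handler above only allows a task that is still in the SUBMITTED state to be cancelled. A minimal in-memory sketch of that state gate, with the `TaskStatus` type and status constants as illustrative stand-ins for the `orm` package's values:

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative stand-ins for the orm.AsyncTaskStatus* constants.
type TaskStatus string

const (
	StatusSubmitted TaskStatus = "SUBMITTED"
	StatusRunning   TaskStatus = "RUNNING"
	StatusFailed    TaskStatus = "FAILED"
)

var errNotCancellable = errors.New("task cannot be cancelled (already running or completed)")

// cancelTask applies the same rule as the handler: only a task that is
// still SUBMITTED may be moved to FAILED with a cancellation reason.
func cancelTask(status TaskStatus) (TaskStatus, error) {
	if status != StatusSubmitted {
		return status, errNotCancellable
	}
	return StatusFailed, nil
}

func main() {
	next, err := cancelTask(StatusSubmitted)
	fmt.Println(next, err) // FAILED <nil>

	_, err = cancelTask(StatusRunning)
	fmt.Println(err != nil) // true
}
```

In the real handler this transition is split across `FailAsyncTask` and `UpdateAsyncTaskResultWithError`, so the check happens before either write.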

View File

@@ -0,0 +1,157 @@
// Package handler provides HTTP handlers for various endpoints.
package handler
import (
"modelRT/constants"
"modelRT/database"
"modelRT/logger"
"modelRT/network"
"modelRT/orm"
"modelRT/task"
"github.com/gin-gonic/gin"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/propagation"
)
// AsyncTaskCreateHandler handles creation of asynchronous tasks
// @Summary Create an async task
// @Description Create a new asynchronous task and return its task ID; the task is submitted to the queue for processing
// @Tags AsyncTask
// @Accept json
// @Produce json
// @Param request body network.AsyncTaskCreateRequest true "task creation request"
// @Success 200 {object} network.SuccessResponse{payload=network.AsyncTaskCreateResponse} "task created successfully"
// @Failure 400 {object} network.FailureResponse "invalid request parameters"
// @Failure 500 {object} network.FailureResponse "internal server error"
// @Router /task/async [post]
func AsyncTaskCreateHandler(c *gin.Context) {
ctx := c.Request.Context()
var request network.AsyncTaskCreateRequest
if err := c.ShouldBindJSON(&request); err != nil {
logger.Error(ctx, "unmarshal async task create request failed", "error", err)
renderRespFailure(c, constants.RespCodeInvalidParams, "invalid request parameters", nil)
return
}
// validate task type
if !orm.IsValidAsyncTaskType(request.TaskType) {
logger.Error(ctx, "check task type invalid", "task_type", request.TaskType)
renderRespFailure(c, constants.RespCodeInvalidParams, "invalid task type", nil)
return
}
// validate task parameters based on task type
if !validateTaskParams(request.TaskType, request.Params) {
logger.Error(ctx, "check task parameters invalid", "task_type", request.TaskType, "params", request.Params)
renderRespFailure(c, constants.RespCodeInvalidParams, "invalid task parameters", nil)
return
}
pgClient := database.GetPostgresDBClient()
if pgClient == nil {
logger.Error(ctx, "database connection not found in context")
renderRespFailure(c, constants.RespCodeServerError, "database connection error", nil)
return
}
// create task in database
taskType := orm.AsyncTaskType(request.TaskType)
params := orm.JSONMap(request.Params)
asyncTask, err := database.CreateAsyncTask(ctx, pgClient, taskType, params)
if err != nil {
logger.Error(ctx, "create async task in database failed", "error", err)
renderRespFailure(c, constants.RespCodeServerError, "failed to create task", nil)
return
}
// enqueue task to channel for async publishing to RabbitMQ
msg := task.NewTaskQueueMessageWithPriority(asyncTask.TaskID, task.TaskType(request.TaskType), 5)
// propagate the current OTel span context so the async chain stays on the same trace
carrier := make(map[string]string)
otel.GetTextMapPropagator().Inject(ctx, propagation.MapCarrier(carrier))
msg.TraceCarrier = carrier
msg.Params = request.Params
task.TaskMsgChan <- msg
logger.Info(ctx, "task enqueued to channel", "task_id", asyncTask.TaskID, "queue", constants.TaskQueueName)
logger.Info(ctx, "async task created success", "task_id", asyncTask.TaskID, "task_type", request.TaskType)
// return success response
payload := genAsyncTaskCreatePayload(asyncTask.TaskID.String())
renderRespSuccess(c, constants.RespCodeSuccess, "task created successfully", payload)
}
func validateTaskParams(taskType string, params map[string]any) bool {
switch taskType {
case string(orm.AsyncTaskTypeTopologyAnalysis):
return validateTopologyAnalysisParams(params)
case string(orm.AsyncTaskTypePerformanceAnalysis):
return validatePerformanceAnalysisParams(params)
case string(orm.AsyncTaskTypeEventAnalysis):
return validateEventAnalysisParams(params)
case string(orm.AsyncTaskTypeBatchImport):
return validateBatchImportParams(params)
case string(orm.AsyncTaskTypeTest):
return validateTestTaskParams(params)
default:
return false
}
}
func validateTopologyAnalysisParams(params map[string]any) bool {
if v, ok := params["start_component_uuid"]; !ok || v == "" {
return false
}
if v, ok := params["end_component_uuid"]; !ok || v == "" {
return false
}
// check_in_service is optional; validate type when present
if v, exists := params["check_in_service"]; exists {
if _, isBool := v.(bool); !isBool {
return false
}
}
return true
}
func validatePerformanceAnalysisParams(params map[string]any) bool {
// Check required parameters for performance analysis
if componentIDs, ok := params["component_ids"]; !ok {
return false
} else if ids, isSlice := componentIDs.([]interface{}); !isSlice || len(ids) == 0 {
return false
}
return true
}
func validateEventAnalysisParams(params map[string]any) bool {
// Check required parameters for event analysis
if eventType, ok := params["event_type"]; !ok || eventType == "" {
return false
}
return true
}
func validateBatchImportParams(params map[string]any) bool {
// Check required parameters for batch import
if filePath, ok := params["file_path"]; !ok || filePath == "" {
return false
}
return true
}
func validateTestTaskParams(params map[string]any) bool {
// Test task has optional parameters, all are valid
// sleep_duration defaults to 60 seconds if not provided
return true
}
func genAsyncTaskCreatePayload(taskID string) map[string]any {
payload := map[string]any{
"task_id": taskID,
}
return payload
}
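The topology-analysis parameter check can be exercised in isolation. This standalone copy keeps the same logic (the UUID strings are hypothetical inputs) and shows that the two component UUIDs are mandatory, while `check_in_service` is optional but must be a bool when present:

```go
package main

import "fmt"

// validateTopologyAnalysisParams mirrors the handler's check: both
// component UUIDs are required, check_in_service is an optional bool.
func validateTopologyAnalysisParams(params map[string]any) bool {
	if v, ok := params["start_component_uuid"]; !ok || v == "" {
		return false
	}
	if v, ok := params["end_component_uuid"]; !ok || v == "" {
		return false
	}
	// check_in_service is optional; validate type when present
	if v, exists := params["check_in_service"]; exists {
		if _, isBool := v.(bool); !isBool {
			return false
		}
	}
	return true
}

func main() {
	ok := validateTopologyAnalysisParams(map[string]any{
		"start_component_uuid": "uuid-a",
		"end_component_uuid":   "uuid-b",
		"check_in_service":     true,
	})
	bad := validateTopologyAnalysisParams(map[string]any{
		"start_component_uuid": "uuid-a",
		"end_component_uuid":   "uuid-b",
		"check_in_service":     "yes", // wrong type: string, not bool
	})
	fmt.Println(ok, bad) // true false
}
```

Note that because the params arrive as `map[string]any` from JSON, a JSON `true`/`false` decodes to a Go `bool`, which is exactly what the type assertion checks.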

View File

@@ -0,0 +1,54 @@
// Package handler provides HTTP handlers for various endpoints.
package handler
import (
"net/http"
"modelRT/database"
"modelRT/logger"
"modelRT/network"
"github.com/gin-gonic/gin"
)
// AsyncTaskProgressUpdateHandler handles updating task progress (internal use, not exposed via API)
func AsyncTaskProgressUpdateHandler(c *gin.Context) {
ctx := c.Request.Context()
var request network.AsyncTaskProgressUpdate
if err := c.ShouldBindJSON(&request); err != nil {
logger.Error(ctx, "failed to unmarshal async task progress update request", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: "invalid request parameters",
})
return
}
pgClient := database.GetPostgresDBClient()
if pgClient == nil {
logger.Error(ctx, "database connection not found in context")
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusInternalServerError,
Msg: "database connection error",
})
return
}
// Update task progress
err := database.UpdateAsyncTaskProgress(ctx, pgClient, request.TaskID, request.Progress)
if err != nil {
logger.Error(ctx, "failed to update async task progress", "task_id", request.TaskID, "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusInternalServerError,
Msg: "failed to update task progress",
})
return
}
c.JSON(http.StatusOK, network.SuccessResponse{
Code: 2000,
Msg: "task progress updated successfully",
Payload: nil,
})
}

View File

@@ -0,0 +1,124 @@
// Package handler provides HTTP handlers for various endpoints.
package handler
import (
"net/http"
"modelRT/database"
"modelRT/logger"
"modelRT/network"
"github.com/gin-gonic/gin"
"github.com/gofrs/uuid"
"gorm.io/gorm"
)
// AsyncTaskResultDetailHandler handles detailed query of a single async task result
// @Summary Query async task detail
// @Description Query the detailed status and result of an async task by its task ID
// @Tags AsyncTask
// @Accept json
// @Produce json
// @Param task_id path string true "task ID"
// @Success 200 {object} network.SuccessResponse{payload=network.AsyncTaskResult} "query succeeded"
// @Failure 400 {object} network.FailureResponse "invalid request parameters"
// @Failure 404 {object} network.FailureResponse "task not found"
// @Failure 500 {object} network.FailureResponse "internal server error"
// @Router /task/async/{task_id} [get]
func AsyncTaskResultDetailHandler(c *gin.Context) {
ctx := c.Request.Context()
// Parse task ID from path parameter
taskIDStr := c.Param("task_id")
if taskIDStr == "" {
logger.Error(ctx, "task_id parameter is required")
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: "task_id parameter is required",
})
return
}
taskID, err := uuid.FromString(taskIDStr)
if err != nil {
logger.Error(ctx, "invalid task ID format", "task_id", taskIDStr, "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: "invalid task ID format",
})
return
}
pgClient := database.GetPostgresDBClient()
if pgClient == nil {
logger.Error(ctx, "database connection not found in context")
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusInternalServerError,
Msg: "database connection error",
})
return
}
// Query task from database
asyncTask, err := database.GetAsyncTaskByID(ctx, pgClient, taskID)
if err != nil {
if err == gorm.ErrRecordNotFound {
logger.Error(ctx, "async task not found", "task_id", taskID)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusNotFound,
Msg: "task not found",
})
return
}
logger.Error(ctx, "failed to query async task from database", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusInternalServerError,
Msg: "failed to query task",
})
return
}
// Query task result from database
taskResult, err := database.GetAsyncTaskResult(ctx, pgClient, taskID)
if err != nil {
logger.Error(ctx, "failed to query async task result from database", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusInternalServerError,
Msg: "failed to query task result",
})
return
}
// Convert to response format
responseTask := network.AsyncTaskResult{
TaskID: asyncTask.TaskID,
TaskType: string(asyncTask.TaskType),
Status: string(asyncTask.Status),
CreatedAt: asyncTask.CreatedAt,
FinishedAt: asyncTask.FinishedAt,
Progress: asyncTask.Progress,
}
// Add result or error information if available
if taskResult != nil {
if taskResult.Result != nil {
responseTask.Result = map[string]any(taskResult.Result)
}
if taskResult.ErrorCode != nil {
responseTask.ErrorCode = taskResult.ErrorCode
}
if taskResult.ErrorMessage != nil {
responseTask.ErrorMessage = taskResult.ErrorMessage
}
if taskResult.ErrorDetail != nil {
responseTask.ErrorDetail = map[string]any(taskResult.ErrorDetail)
}
}
// Return success response
c.JSON(http.StatusOK, network.SuccessResponse{
Code: 2000,
Msg: "query completed",
Payload: responseTask,
})
}

View File

@@ -0,0 +1,182 @@
// Package handler provides HTTP handlers for various endpoints.
package handler
import (
"net/http"
"strings"
"modelRT/database"
"modelRT/logger"
"modelRT/network"
"modelRT/orm"
"github.com/gin-gonic/gin"
"github.com/gofrs/uuid"
)
// AsyncTaskResultQueryHandler handles querying of asynchronous task results
// @Summary Query async task results
// @Description Query the status and results of async tasks by a list of task IDs
// @Tags AsyncTask
// @Accept json
// @Produce json
// @Param task_ids query string true "comma-separated list of task IDs"
// @Success 200 {object} network.SuccessResponse{payload=network.AsyncTaskResultQueryResponse} "query succeeded"
// @Failure 400 {object} network.FailureResponse "invalid request parameters"
// @Failure 500 {object} network.FailureResponse "internal server error"
// @Router /task/async/results [get]
func AsyncTaskResultQueryHandler(c *gin.Context) {
ctx := c.Request.Context()
// Parse task IDs from query parameter
taskIDsParam := c.Query("task_ids")
if taskIDsParam == "" {
logger.Error(ctx, "task_ids parameter is required")
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: "task_ids parameter is required",
})
return
}
// Parse comma-separated task IDs
var taskIDs []uuid.UUID
taskIDStrs := splitCommaSeparated(taskIDsParam)
for _, taskIDStr := range taskIDStrs {
taskID, err := uuid.FromString(taskIDStr)
if err != nil {
logger.Error(ctx, "invalid task ID format", "task_id", taskIDStr, "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: "invalid task ID format",
})
return
}
taskIDs = append(taskIDs, taskID)
}
if len(taskIDs) == 0 {
logger.Error(ctx, "no valid task IDs provided")
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: "no valid task IDs provided",
})
return
}
pgClient := database.GetPostgresDBClient()
if pgClient == nil {
logger.Error(ctx, "database connection not found in context")
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusInternalServerError,
Msg: "database connection error",
})
return
}
// Query tasks from database
asyncTasks, err := database.GetAsyncTasksByIDs(ctx, pgClient, taskIDs)
if err != nil {
logger.Error(ctx, "failed to query async tasks from database", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusInternalServerError,
Msg: "failed to query tasks",
})
return
}
// Query task results from database
taskResults, err := database.GetAsyncTaskResults(ctx, pgClient, taskIDs)
if err != nil {
logger.Error(ctx, "failed to query async task results from database", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusInternalServerError,
Msg: "failed to query task results",
})
return
}
// Create a map of task results for easy lookup
taskResultMap := make(map[uuid.UUID]orm.AsyncTaskResult)
for _, result := range taskResults {
taskResultMap[result.TaskID] = result
}
// Convert to response format
var responseTasks []network.AsyncTaskResult
for _, asyncTask := range asyncTasks {
taskResult := network.AsyncTaskResult{
TaskID: asyncTask.TaskID,
TaskType: string(asyncTask.TaskType),
Status: string(asyncTask.Status),
CreatedAt: asyncTask.CreatedAt,
FinishedAt: asyncTask.FinishedAt,
Progress: asyncTask.Progress,
}
// Add result or error information if available
if result, exists := taskResultMap[asyncTask.TaskID]; exists {
if result.Result != nil {
taskResult.Result = map[string]any(result.Result)
}
if result.ErrorCode != nil {
taskResult.ErrorCode = result.ErrorCode
}
if result.ErrorMessage != nil {
taskResult.ErrorMessage = result.ErrorMessage
}
if result.ErrorDetail != nil {
taskResult.ErrorDetail = map[string]any(result.ErrorDetail)
}
}
responseTasks = append(responseTasks, taskResult)
}
// Return success response
c.JSON(http.StatusOK, network.SuccessResponse{
Code: 2000,
Msg: "query completed",
Payload: network.AsyncTaskResultQueryResponse{
Total: len(responseTasks),
Tasks: responseTasks,
},
})
}
func splitCommaSeparated(s string) []string {
var result []string
var current strings.Builder
inQuotes := false
escape := false
for _, ch := range s {
if escape {
current.WriteRune(ch)
escape = false
continue
}
switch ch {
case '\\':
escape = true
case '"':
inQuotes = !inQuotes
case ',':
if !inQuotes {
result = append(result, strings.TrimSpace(current.String()))
current.Reset()
} else {
current.WriteRune(ch)
}
default:
current.WriteRune(ch)
}
}
if current.Len() > 0 {
result = append(result, strings.TrimSpace(current.String()))
}
return result
}
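Taken out of the handler, the quote-aware splitter above can be run standalone. This sketch copies the function verbatim and demonstrates its quoting and escaping behaviour; the inputs are illustrative, not values the handler actually receives:

```go
package main

import (
	"fmt"
	"strings"
)

// splitCommaSeparated splits on commas, honouring double quotes and
// backslash escapes, and trims whitespace around each field. Quote
// characters themselves are stripped from the output.
func splitCommaSeparated(s string) []string {
	var result []string
	var current strings.Builder
	inQuotes := false
	escape := false
	for _, ch := range s {
		if escape {
			current.WriteRune(ch)
			escape = false
			continue
		}
		switch ch {
		case '\\':
			escape = true
		case '"':
			inQuotes = !inQuotes
		case ',':
			if !inQuotes {
				result = append(result, strings.TrimSpace(current.String()))
				current.Reset()
			} else {
				current.WriteRune(ch)
			}
		default:
			current.WriteRune(ch)
		}
	}
	if current.Len() > 0 {
		result = append(result, strings.TrimSpace(current.String()))
	}
	return result
}

func main() {
	fmt.Println(splitCommaSeparated(`a, "b,c" ,d`)) // [a b,c d]
	fmt.Println(splitCommaSeparated(`x\,y,z`))      // [x,y z]
}
```

One consequence worth noting: a trailing comma produces no empty final field (the builder is empty at end-of-input), while an empty field between two commas is kept as `""`.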

View File

@@ -0,0 +1,89 @@
// Package handler provides HTTP handlers for various endpoints.
package handler
import (
"net/http"
"modelRT/database"
"modelRT/logger"
"modelRT/network"
"modelRT/orm"
"github.com/gin-gonic/gin"
)
// AsyncTaskStatusUpdateHandler handles updating task status (internal use, not exposed via API)
func AsyncTaskStatusUpdateHandler(c *gin.Context) {
ctx := c.Request.Context()
var request network.AsyncTaskStatusUpdate
if err := c.ShouldBindJSON(&request); err != nil {
logger.Error(ctx, "failed to unmarshal async task status update request", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: "invalid request parameters",
})
return
}
// Validate status
validStatus := map[string]bool{
string(orm.AsyncTaskStatusSubmitted): true,
string(orm.AsyncTaskStatusRunning): true,
string(orm.AsyncTaskStatusCompleted): true,
string(orm.AsyncTaskStatusFailed): true,
}
if !validStatus[request.Status] {
logger.Error(ctx, "invalid task status", "status", request.Status)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: "invalid task status",
})
return
}
pgClient := database.GetPostgresDBClient()
if pgClient == nil {
logger.Error(ctx, "database connection not found in context")
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusInternalServerError,
Msg: "database connection error",
})
return
}
// Update task status
status := orm.AsyncTaskStatus(request.Status)
err := database.UpdateAsyncTaskStatus(ctx, pgClient, request.TaskID, status)
if err != nil {
logger.Error(ctx, "failed to update async task status", "task_id", request.TaskID, "status", request.Status, "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusInternalServerError,
Msg: "failed to update task status",
})
return
}
// If task is completed or failed, update finished_at timestamp
if request.Status == string(orm.AsyncTaskStatusCompleted) {
err = database.CompleteAsyncTask(ctx, pgClient, request.TaskID, request.Timestamp)
} else if request.Status == string(orm.AsyncTaskStatusFailed) {
err = database.FailAsyncTask(ctx, pgClient, request.TaskID, request.Timestamp)
}
if err != nil {
logger.Error(ctx, "failed to update async task completion timestamp", "task_id", request.TaskID, "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusInternalServerError,
Msg: "failed to update task completion timestamp",
})
return
}
c.JSON(http.StatusOK, network.SuccessResponse{
Code: 2000,
Msg: "task status updated successfully",
Payload: nil,
})
}
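The status whitelist above uses Go's map-as-set idiom before touching the database. A minimal standalone sketch of the same pattern (the status strings mirror the orm constants; everything else here is illustrative, not the project's API):

```go
package main

import "fmt"

// isValidStatus reports whether s is one of the accepted task statuses,
// using a map as a set — the same idiom as the handler's validStatus map.
func isValidStatus(s string) bool {
	valid := map[string]bool{
		"submitted": true,
		"running":   true,
		"completed": true,
		"failed":    true,
	}
	return valid[s]
}

func main() {
	fmt.Println(isValidStatus("running"))   // true
	fmt.Println(isValidStatus("cancelled")) // false
}
```

Rejecting unknown statuses up front keeps invalid values out of the async_task table without relying on a database CHECK constraint.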

View File

@ -3,7 +3,7 @@ package handler
import (
"net/http"
"modelRT/constants"
"modelRT/common"
"modelRT/diagram"
"modelRT/logger"
"modelRT/network"
@ -16,7 +16,7 @@ func AttrDeleteHandler(c *gin.Context) {
var request network.AttrDeleteRequest
clientToken := c.GetString("client_token")
if clientToken == "" {
err := constants.ErrGetClientToken
err := common.ErrGetClientToken
logger.Error(c, "failed to get client token from context", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{

View File

@ -3,7 +3,7 @@ package handler
import (
"net/http"
"modelRT/constants"
"modelRT/common"
"modelRT/database"
"modelRT/logger"
"modelRT/network"
@ -17,7 +17,7 @@ func AttrGetHandler(c *gin.Context) {
clientToken := c.GetString("client_token")
if clientToken == "" {
err := constants.ErrGetClientToken
err := common.ErrGetClientToken
logger.Error(c, "failed to get client token from context", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{

View File

@ -3,7 +3,7 @@ package handler
import (
"net/http"
"modelRT/constants"
"modelRT/common"
"modelRT/diagram"
"modelRT/logger"
"modelRT/network"
@ -17,7 +17,7 @@ func AttrSetHandler(c *gin.Context) {
clientToken := c.GetString("client_token")
if clientToken == "" {
err := constants.ErrGetClientToken
err := common.ErrGetClientToken
logger.Error(c, "failed to get client token from context", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{

View File

@ -7,6 +7,7 @@ import (
"fmt"
"net/http"
"modelRT/common"
"modelRT/constants"
"modelRT/database"
"modelRT/diagram"
@ -43,7 +44,7 @@ func DiagramNodeLinkHandler(c *gin.Context) {
var request network.DiagramNodeLinkRequest
clientToken := c.GetString("client_token")
if clientToken == "" {
err := constants.ErrGetClientToken
err := common.ErrGetClientToken
logger.Error(c, "failed to get client token from context", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
@ -167,7 +168,7 @@ func processLinkSetData(ctx context.Context, action string, level int, prevLinkS
err2 = prevLinkSet.SREM(prevMember)
}
default:
err := constants.ErrUnsupportedLinkAction
err := common.ErrUnsupportedLinkAction
logger.Error(ctx, "unsupported diagram node link process action", "action", action, "error", err)
return err
}

View File

@ -30,3 +30,14 @@ func renderRespSuccess(c *gin.Context, code int, msg string, payload any) {
}
c.JSON(http.StatusOK, resp)
}
func renderWSRespFailure(c *gin.Context, code int, msg string, payload any) {
resp := network.WSResponse{
Code: code,
Msg: msg,
}
if payload != nil {
resp.Payload = payload
}
c.JSON(http.StatusOK, resp)
}

View File

@ -6,10 +6,10 @@ import (
"net/http"
"strconv"
"modelRT/alert"
"modelRT/constants"
"modelRT/logger"
"modelRT/network"
"modelRT/real-time-data/alert"
"github.com/gin-gonic/gin"
)

View File

@ -4,7 +4,7 @@ package handler
import (
"net/http"
"modelRT/constants"
"modelRT/common"
"modelRT/database"
"modelRT/diagram"
"modelRT/logger"
@ -19,7 +19,7 @@ func MeasurementGetHandler(c *gin.Context) {
clientToken := c.GetString("client_token")
if clientToken == "" {
err := constants.ErrGetClientToken
err := common.ErrGetClientToken
logger.Error(c, "failed to get client token from context", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{

View File

@ -6,6 +6,7 @@ import (
"fmt"
"net/http"
"modelRT/common"
"modelRT/constants"
"modelRT/database"
"modelRT/diagram"
@ -20,7 +21,7 @@ func MeasurementLinkHandler(c *gin.Context) {
var request network.MeasurementLinkRequest
clientToken := c.GetString("client_token")
if clientToken == "" {
err := constants.ErrGetClientToken
err := common.ErrGetClientToken
logger.Error(c, "failed to get client token from context", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
@ -93,7 +94,7 @@ func MeasurementLinkHandler(c *gin.Context) {
logger.Error(c, "del measurement link process operation failed", "measurement_id", measurementID, "action", action, "error", err)
}
default:
err = constants.ErrUnsupportedLinkAction
err = common.ErrUnsupportedLinkAction
logger.Error(c, "unsupported measurement link process action", "measurement_id", measurementID, "action", action, "error", err)
}

View File

@ -39,20 +39,14 @@ func PullRealTimeDataHandler(c *gin.Context) {
if clientID == "" {
err := fmt.Errorf("clientID is missing from the path")
logger.Error(c, "query clientID from path failed", "error", err, "url", c.Request.RequestURI)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: err.Error(),
})
renderWSRespFailure(c, constants.RespCodeInvalidParams, err.Error(), nil)
return
}
conn, err := pullUpgrader.Upgrade(c.Writer, c.Request, nil)
if err != nil {
logger.Error(c, "upgrade http protocol to websocket protocal failed", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: err.Error(),
})
logger.Error(c, "upgrade http protocol to websocket protocol failed", "error", err)
renderWSRespFailure(c, constants.RespCodeServerError, err.Error(), nil)
return
}
defer conn.Close()
@ -60,9 +54,18 @@ func PullRealTimeDataHandler(c *gin.Context) {
ctx, cancel := context.WithCancel(c.Request.Context())
defer cancel()
conn.SetCloseHandler(func(code int, text string) error {
logger.Info(c.Request.Context(), "websocket processor shutdown trigger",
"clientID", clientID, "code", code, "reason", text)
// call cancel to notify other goroutines to stop working
cancel()
return nil
})
// TODO[BACKPRESSURE-ISSUE] initially define the fan-in model with a fixed large capacity #1
fanInChan := make(chan network.RealTimePullTarget, constants.FanInChanMaxSize)
sendChan := make(chan []network.RealTimePullTarget, constants.SendChanBufferSize)
sendChan := make(chan network.WSResponse, constants.SendChanBufferSize)
go processTargetPolling(ctx, globalSubState, clientID, fanInChan, sendChan)
go readClientMessages(ctx, conn, clientID, cancel)
@ -79,52 +82,33 @@ func PullRealTimeDataHandler(c *gin.Context) {
select {
case targetData, ok := <-fanInChan:
if !ok {
logger.Error(ctx, "fanInChan closed unexpectedly", "client_id", clientID)
sendChan <- network.WSResponse{
Code: constants.RespCodeServerError,
Msg: "abnormal shutdown of data fan-in channel",
}
return
}
buffer = append(buffer, targetData)
if len(buffer) >= bufferMaxSize {
// buffer is full, send immediately
select {
case sendChan <- buffer:
default:
logger.Warn(ctx, "sendChan is full, dropping aggregated data batch (buffer is full)", "client_id", clientID)
}
// reset buffer
buffer = make([]network.RealTimePullTarget, 0, bufferMaxSize)
// reset the ticker to prevent it from triggering immediately after the ticker is sent
flushBuffer(ctx, &buffer, sendChan, clientID, "buffer_full")
ticker.Reset(sendMaxInterval)
}
case <-ticker.C:
if len(buffer) > 0 {
// when the ticker is triggered, all data in the send buffer is sent
select {
case sendChan <- buffer:
default:
logger.Warn(ctx, "sendChan is full, dropping aggregated data batch (ticker is triggered)", "client_id", clientID)
}
// reset buffer
buffer = make([]network.RealTimePullTarget, 0, bufferMaxSize)
flushBuffer(ctx, &buffer, sendChan, clientID, "ticker_timeout")
}
case <-ctx.Done():
// send the last remaining data
// last refresh before exiting
if len(buffer) > 0 {
select {
case sendChan <- buffer:
default:
logger.Warn(ctx, "sendChan is full, cannot send last remaining data during shutdown.", "client_id", clientID)
flushBuffer(ctx, &buffer, sendChan, clientID, "shutdown")
}
}
logger.Info(ctx, "pullRealTimeDataHandler exiting as context is done.", "client_id", clientID)
return
}
}
}
// readClientMessages is responsible for continuously listening for messages sent by the client (e.g. Ping/Pong, Close Frame, or control commands)
// readClientMessages continuously listens for messages sent by the client (such as Ping/Pong, Close Frame, or control commands)
func readClientMessages(ctx context.Context, conn *websocket.Conn, clientID string, cancel context.CancelFunc) {
// conn.SetReadLimit(512)
for {
@ -149,54 +133,47 @@ func readClientMessages(ctx context.Context, conn *websocket.Conn, clientID stri
}
}
// sendAggregateRealTimeDataStream continuously pushes aggregated real-time data to the client
func sendAggregateRealTimeDataStream(conn *websocket.Conn, targetsData []network.RealTimePullTarget) error {
if len(targetsData) == 0 {
return nil
func flushBuffer(ctx context.Context, buffer *[]network.RealTimePullTarget, sendChan chan<- network.WSResponse, clientID string, reason string) {
if len(*buffer) == 0 {
return
}
response := network.SuccessResponse{
Code: 200,
Msg: "success",
resp := network.WSResponse{
Code: constants.RespCodeSuccess,
Msg: "process completed",
Payload: network.RealTimePullPayload{
Targets: targetsData,
Targets: *buffer,
},
}
return conn.WriteJSON(response)
select {
case sendChan <- resp:
default:
logger.Warn(ctx, "sendChan blocked, dropping data batch", "client_id", clientID, "reason", reason)
}
*buffer = make([]network.RealTimePullTarget, 0, constants.SendMaxBatchSize)
}
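flushBuffer's select/default is a non-blocking send: when the sender goroutine falls behind, the batch is dropped and logged instead of stalling the polling pipeline. The idiom in isolation (a generic sketch, not the project's API):

```go
package main

import "fmt"

// trySend attempts a non-blocking send: if ch has room the value is queued,
// otherwise it is dropped — the same select/default idiom flushBuffer uses
// to shed load under backpressure.
func trySend[T any](ch chan<- T, v T) bool {
	select {
	case ch <- v:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan int, 1)
	fmt.Println(trySend(ch, 1)) // true: buffer has room
	fmt.Println(trySend(ch, 2)) // false: buffer full, value dropped
}
```

Dropping with a warning is a deliberate trade-off: real-time readings go stale quickly, so shedding a batch is preferable to unbounded buffering or blocking the fan-in loop.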
// sendDataStream manages a dedicated goroutine that pushes data batches or system signals to the websocket client
func sendDataStream(ctx context.Context, conn *websocket.Conn, clientID string, sendChan <-chan []network.RealTimePullTarget, cancel context.CancelFunc) {
logger.Info(ctx, "start dedicated websocket sender goroutine", "client_id", clientID)
for targetsData := range sendChan {
// TODO: use constants.SysCtrlPrefix + switch-case to handle possible business extensions
if len(targetsData) == 1 && targetsData[0].ID == constants.SysCtrlAllRemoved {
err := conn.WriteJSON(map[string]any{
"code": 2101,
"msg": "all targets removed in given client_id",
"payload": map[string]int{
"active_targets_count": 0,
},
})
if err != nil {
logger.Error(ctx, "send all targets removed system signal failed", "client_id", clientID, "error", err)
cancel()
}
continue
func sendDataStream(ctx context.Context, conn *websocket.Conn, clientID string, sendChan <-chan network.WSResponse, cancel context.CancelFunc) {
defer func() {
if r := recover(); r != nil {
logger.Error(ctx, "sendDataStream recovered from panic", "err", r)
}
}()
if err := sendAggregateRealTimeDataStream(conn, targetsData); err != nil {
logger.Error(ctx, "send the real time aggregate data failed in sender goroutine", "client_id", clientID, "error", err)
logger.Info(ctx, "start dedicated websocket sender goroutine", "client_id", clientID)
for resp := range sendChan {
if err := conn.WriteJSON(resp); err != nil {
logger.Error(ctx, "websocket write failed", "client_id", clientID, "error", err)
cancel()
return
}
}
logger.Info(ctx, "sender goroutine exiting as channel is closed", "client_id", clientID)
}
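The defer/recover guard added to sendDataStream keeps a panic in the sender goroutine (for example, a write to a torn-down connection) from crashing the whole process. The guard in isolation, with illustrative names:

```go
package main

import "fmt"

// runGuarded invokes fn with a recover guard and reports the recovered
// value (nil when fn completed normally) — the same defer/recover shape
// sendDataStream installs at the top of its goroutine.
func runGuarded(fn func()) (recovered any) {
	defer func() {
		if r := recover(); r != nil {
			recovered = r // a real handler would log this instead
		}
	}()
	fn()
	return nil
}

func main() {
	fmt.Println(runGuarded(func() {}))                // <nil>
	fmt.Println(runGuarded(func() { panic("boom") })) // boom
}
```

Note that recover only works inside a deferred function in the same goroutine, which is why each spawned goroutine needs its own guard.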
// processTargetPolling processes the targets in the subscription map and continuously retrieves data from redis for each target
func processTargetPolling(ctx context.Context, s *SharedSubState, clientID string, fanInChan chan network.RealTimePullTarget, sendChan chan<- []network.RealTimePullTarget) {
// ensure the fanInChan will not leak
defer close(fanInChan)
func processTargetPolling(ctx context.Context, s *SharedSubState, clientID string, fanInChan chan network.RealTimePullTarget, sendChan chan<- network.WSResponse) {
logger.Info(ctx, fmt.Sprintf("start processing real time data polling for clientID:%s", clientID))
stopChanMap := make(map[string]chan struct{})
s.globalMutex.RLock()
@ -383,7 +360,7 @@ func updateTargets(ctx context.Context, config *RealTimeSubConfig, stopChanMap m
}
// removeTargets stops the running polling goroutines for targets that were removed
func removeTargets(ctx context.Context, stopChanMap map[string]chan struct{}, removeTargets []string, sendChan chan<- []network.RealTimePullTarget) {
func removeTargets(ctx context.Context, stopChanMap map[string]chan struct{}, removeTargets []string, sendChan chan<- network.WSResponse) {
for _, target := range removeTargets {
stopChan, exists := stopChanMap[target]
if !exists {
@ -402,17 +379,18 @@ func removeTargets(ctx context.Context, stopChanMap map[string]chan struct{}, re
}
}
func sendSpecialStatusToClient(ctx context.Context, sendChan chan<- []network.RealTimePullTarget) {
specialTarget := network.RealTimePullTarget{
ID: constants.SysCtrlAllRemoved,
Datas: []network.RealTimePullData{},
func sendSpecialStatusToClient(ctx context.Context, sendChan chan<- network.WSResponse) {
// TODO: use constants.SysCtrlPrefix + switch-case to handle possible business extensions
resp := network.WSResponse{
Code: constants.RespCodeSuccessWithNoSub,
Msg: "all targets removed",
Payload: map[string]int{"active_targets_count": 0},
}
select {
case sendChan <- []network.RealTimePullTarget{specialTarget}:
logger.Info(ctx, "sent 2101 status request to sendChan")
case sendChan <- resp:
default:
logger.Warn(ctx, "sendChan is full, skipping 2101 status message")
logger.Warn(ctx, "sendChan is full, skipping 2101 status")
}
}
@ -423,7 +401,6 @@ func stopAllPolling(ctx context.Context, stopChanMap map[string]chan struct{}) {
close(stopChan)
}
clear(stopChanMap)
return
}
// redisPollingConfig defines the parameters used to query real-time data from redis
@ -463,7 +440,7 @@ func realTimeDataQueryFromRedis(ctx context.Context, config redisPollingConfig,
}
func performQuery(ctx context.Context, client *diagram.RedisClient, config redisPollingConfig, fanInChan chan network.RealTimePullTarget) {
members, err := client.QueryByZRangeByLex(ctx, config.queryKey, config.dataSize)
members, err := client.QueryByZRange(ctx, config.queryKey, config.dataSize)
if err != nil {
logger.Error(ctx, "query real time data from redis failed", "key", config.queryKey, "error", err)
return

View File

@ -168,7 +168,6 @@ func receiveRealTimeDataByWebSocket(ctx context.Context, params url.Values, tran
}
transportChannel <- subPoss
}
return
}
// messageTypeToString is an auxiliary func that converts a websocket message type to a string

View File

@ -5,9 +5,9 @@ import (
"context"
"fmt"
"maps"
"net/http"
"sync"
"modelRT/common"
"modelRT/constants"
"modelRT/database"
"modelRT/logger"
@ -33,42 +33,42 @@ func init() {
// @Accept json
// @Produce json
// @Param request body network.RealTimeSubRequest true "Real-time data subscription for measurement nodes"
// @Success 200 {object} network.SuccessResponse{payload=network.RealTimeSubPayload} "Subscription result list for real-time data"
// @Success 2000 {object} network.SuccessResponse{payload=network.RealTimeSubPayload} "Subscription result list for real-time data"
//
// @Example 200 {
// "code": 200,
// "msg": "success",
// @Example 2000 {
// "code": 2000,
// "msg": "process completed",
// "payload": {
// "targets": [
// {
// "id": "grid1.zone1.station1.ns1.tag1.bay.I11_C_rms",
// "code": "1001",
// "code": "20000",
// "msg": "subscription success"
// },
// {
// "id": "grid1.zone1.station1.ns1.tag1.bay.I11_B_rms",
// "code": "1002",
// "code": "20000",
// "msg": "subscription failed"
// }
// ]
// }
// }
//
// @Failure 400 {object} network.FailureResponse{payload=network.RealTimeSubPayload} "Subscription result list for real-time data"
// @Failure 3000 {object} network.FailureResponse{payload=network.RealTimeSubPayload} "Subscription result list for real-time data"
//
// @Example 400 {
// "code": 400,
// "msg": "failed to get recommend data from redis",
// @Example 3000 {
// "code": 3000,
// "msg": "process completed with partial failures",
// "payload": {
// "targets": [
// {
// "id": "grid1.zone1.station1.ns1.tag1.bay.I11_A_rms",
// "code": "1002",
// "code": "40005",
// "msg": "subscription failed"
// },
// {
// "id": "grid1.zone1.station1.ns1.tag1.bay.I11_B_rms",
// "code": "1002",
// "code": "50001",
// "msg": "subscription failed"
// }
// ]
@ -83,10 +83,7 @@ func RealTimeSubHandler(c *gin.Context) {
if err := c.ShouldBindJSON(&request); err != nil {
logger.Error(c, "failed to unmarshal real time query request", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: err.Error(),
})
renderRespFailure(c, constants.RespCodeInvalidParams, err.Error(), nil)
return
}
@ -95,10 +92,7 @@ func RealTimeSubHandler(c *gin.Context) {
id, err := uuid.NewV4()
if err != nil {
logger.Error(c, "failed to generate client id", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: err.Error(),
})
renderRespFailure(c, constants.RespCodeInvalidParams, err.Error(), nil)
return
}
clientID = id.String()
@ -123,110 +117,74 @@ func RealTimeSubHandler(c *gin.Context) {
results, err := globalSubState.CreateConfig(c, tx, clientID, request.Measurements)
if err != nil {
logger.Error(c, "create real time data subscription config failed", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: err.Error(),
Payload: network.RealTimeSubPayload{
renderRespFailure(c, constants.RespCodeFailed, err.Error(), network.RealTimeSubPayload{
ClientID: clientID,
TargetResults: results,
},
})
return
}
c.JSON(http.StatusOK, network.SuccessResponse{
Code: http.StatusOK,
Msg: "success",
Payload: network.RealTimeSubPayload{
renderRespSuccess(c, constants.RespCodeSuccess, "process completed", network.RealTimeSubPayload{
ClientID: clientID,
TargetResults: results,
},
})
return
case constants.SubStopAction:
results, err := globalSubState.RemoveTargets(c, clientID, request.Measurements)
if err != nil {
logger.Error(c, "remove target to real time data subscription config failed", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: err.Error(),
Payload: network.RealTimeSubPayload{
renderRespFailure(c, constants.RespCodeFailed, err.Error(), network.RealTimeSubPayload{
ClientID: clientID,
TargetResults: results,
},
})
return
}
c.JSON(http.StatusOK, network.SuccessResponse{
Code: http.StatusOK,
Msg: "success",
Payload: network.RealTimeSubPayload{
renderRespSuccess(c, constants.RespCodeSuccess, "success", network.RealTimeSubPayload{
ClientID: clientID,
TargetResults: results,
},
})
return
case constants.SubAppendAction:
results, err := globalSubState.AppendTargets(c, tx, clientID, request.Measurements)
if err != nil {
logger.Error(c, "append target to real time data subscription config failed", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: err.Error(),
Payload: network.RealTimeSubPayload{
renderRespFailure(c, constants.RespCodeFailed, err.Error(), network.RealTimeSubPayload{
ClientID: clientID,
TargetResults: results,
},
})
return
}
c.JSON(http.StatusOK, network.SuccessResponse{
Code: http.StatusOK,
Msg: "success",
Payload: network.RealTimeSubPayload{
renderRespSuccess(c, constants.RespCodeSuccess, "success", network.RealTimeSubPayload{
ClientID: clientID,
TargetResults: results,
},
})
return
case constants.SubUpdateAction:
results, err := globalSubState.UpdateTargets(c, tx, clientID, request.Measurements)
if err != nil {
logger.Error(c, "update target to real time data subscription config failed", "error", err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: err.Error(),
Payload: network.RealTimeSubPayload{
renderRespFailure(c, constants.RespCodeFailed, err.Error(), network.RealTimeSubPayload{
ClientID: clientID,
TargetResults: results,
},
})
return
}
c.JSON(http.StatusOK, network.SuccessResponse{
Code: http.StatusOK,
Msg: "success",
Payload: network.RealTimeSubPayload{
renderRespSuccess(c, constants.RespCodeSuccess, "success", network.RealTimeSubPayload{
ClientID: clientID,
TargetResults: results,
},
})
return
default:
err := fmt.Errorf("%w: request action is %s", constants.ErrUnsupportedSubAction, request.Action)
err := fmt.Errorf("%w: request action is %s", common.ErrUnsupportedSubAction, request.Action)
logger.Error(c, "unsupported action of real time data subscription request", "error", err)
requestTargetsCount := processRealTimeRequestCount(request.Measurements)
results := processRealTimeRequestTargets(request.Measurements, requestTargetsCount, err)
c.JSON(http.StatusOK, network.FailureResponse{
Code: http.StatusBadRequest,
Msg: err.Error(),
Payload: network.RealTimeSubPayload{
results := processRealTimeRequestTargets(request.Measurements, requestTargetsCount, constants.CodeUnsupportSubOperation, err)
renderRespFailure(c, constants.RespCodeInvalidParams, err.Error(), network.RealTimeSubPayload{
ClientID: clientID,
TargetResults: results,
},
})
return
}
@ -283,12 +241,12 @@ func processAndValidateTargetsForStart(ctx context.Context, tx *gorm.DB, measure
targetModel, err := database.ParseDataIdentifierToken(ctx, tx, target)
if err != nil {
logger.Error(ctx, "parse data identity token failed", "error", err, "identity_token", target)
targetResult.Code = constants.SubFailedCode
targetResult.Code = constants.CodeFoundTargetFailed
targetResult.Msg = fmt.Sprintf("%s: %s", constants.SubFailedMsg, err.Error())
targetProcessResults = append(targetProcessResults, targetResult)
continue
}
targetResult.Code = constants.SubSuccessCode
targetResult.Code = constants.CodeSuccess
targetResult.Msg = constants.SubSuccessMsg
targetProcessResults = append(targetProcessResults, targetResult)
successfulTargets = append(successfulTargets, target)
@ -327,7 +285,7 @@ func processAndValidateTargetsForUpdate(ctx context.Context, tx *gorm.DB, config
if _, exist := config.targetContext[target]; !exist {
err := fmt.Errorf("target %s does not exist in subscription list", target)
logger.Error(ctx, "update target does not exist in subscription list", "error", err, "target", target)
targetResult.Code = constants.UpdateSubFailedCode
targetResult.Code = constants.CodeUpdateSubTargetMissing
targetResult.Msg = fmt.Sprintf("%s: %s", constants.UpdateSubFailedMsg, err.Error())
targetProcessResults = append(targetProcessResults, targetResult)
continue
@ -336,13 +294,13 @@ func processAndValidateTargetsForUpdate(ctx context.Context, tx *gorm.DB, config
targetModel, err := database.ParseDataIdentifierToken(ctx, tx, target)
if err != nil {
logger.Error(ctx, "parse data identity token failed", "error", err, "identity_token", target)
targetResult.Code = constants.UpdateSubFailedCode
targetResult.Code = constants.CodeDBQueryFailed
targetResult.Msg = fmt.Sprintf("%s: %s", constants.UpdateSubFailedMsg, err.Error())
targetProcessResults = append(targetProcessResults, targetResult)
continue
}
targetResult.Code = constants.UpdateSubSuccessCode
targetResult.Code = constants.CodeSuccess
targetResult.Msg = constants.UpdateSubSuccessMsg
targetProcessResults = append(targetProcessResults, targetResult)
successfulTargets = append(successfulTargets, target)
@ -473,7 +431,7 @@ func (s *SharedSubState) AppendTargets(ctx context.Context, tx *gorm.DB, clientI
if !exist {
err := fmt.Errorf("clientID %s not found. use CreateConfig to start a new config", clientID)
logger.Error(ctx, "clientID not found. use CreateConfig to start a new config", "error", err)
return processRealTimeRequestTargets(measurements, requestTargetsCount, err), err
return processRealTimeRequestTargets(measurements, requestTargetsCount, constants.CodeAppendSubTargetMissing, err), err
}
targetProcessResults, successfulTargets, newMeasMap, newMeasContextMap := processAndValidateTargetsForStart(ctx, tx, measurements, requestTargetsCount)
@ -507,7 +465,7 @@ func filterAndDeduplicateRepeatTargets(resultsSlice []network.TargetResult, idsS
for index := range resultsSlice {
if _, isTarget := set[resultsSlice[index].ID]; isTarget {
resultsSlice[index].Code = constants.SubRepeatCode
resultsSlice[index].Code = constants.CodeSubTargetRepeat
resultsSlice[index].Msg = constants.SubRepeatMsg
}
}
@ -575,7 +533,7 @@ func (s *SharedSubState) RemoveTargets(ctx context.Context, clientID string, mea
s.globalMutex.RUnlock()
err := fmt.Errorf("clientID %s not found", clientID)
logger.Error(ctx, "clientID not found in remove targets operation", "error", err)
return processRealTimeRequestTargets(measurements, requestTargetsCount, err), err
return processRealTimeRequestTargets(measurements, requestTargetsCount, constants.CodeCancelSubTargetMissing, err), err
}
s.globalMutex.RUnlock()
@ -595,7 +553,7 @@ func (s *SharedSubState) RemoveTargets(ctx context.Context, clientID string, mea
for _, target := range measTargets {
targetResult := network.TargetResult{
ID: target,
Code: constants.CancelSubFailedCode,
Code: constants.CodeCancelSubTargetMissing,
Msg: constants.CancelSubFailedMsg,
}
targetProcessResults = append(targetProcessResults, targetResult)
@ -616,7 +574,7 @@ func (s *SharedSubState) RemoveTargets(ctx context.Context, clientID string, mea
transportTargets.Targets = append(transportTargets.Targets, existingTarget)
targetResult := network.TargetResult{
ID: existingTarget,
Code: constants.CancelSubSuccessCode,
Code: constants.CodeSuccess,
Msg: constants.CancelSubSuccessMsg,
}
targetProcessResults = append(targetProcessResults, targetResult)
@ -639,7 +597,7 @@ func (s *SharedSubState) RemoveTargets(ctx context.Context, clientID string, mea
for target := range targetsToRemoveMap {
targetResult := network.TargetResult{
ID: target,
Code: constants.CancelSubFailedCode,
Code: constants.CodeCancelSubTargetMissing,
Msg: fmt.Sprintf("%s: %s", constants.SubFailedMsg, err.Error()),
}
targetProcessResults = append(targetProcessResults, targetResult)
@ -663,17 +621,15 @@ func (s *SharedSubState) RemoveTargets(ctx context.Context, clientID string, mea
// UpdateTargets define function to update targets in SharedSubState
func (s *SharedSubState) UpdateTargets(ctx context.Context, tx *gorm.DB, clientID string, measurements []network.RealTimeMeasurementItem) ([]network.TargetResult, error) {
requestTargetsCount := processRealTimeRequestCount(measurements)
targetProcessResults := make([]network.TargetResult, 0, requestTargetsCount)
s.globalMutex.RLock()
config, exist := s.subMap[clientID]
s.globalMutex.RUnlock()
if !exist {
s.globalMutex.RUnlock()
err := fmt.Errorf("clientID %s not found", clientID)
logger.Error(ctx, "clientID not found in update targets operation", "error", err)
return processRealTimeRequestTargets(measurements, requestTargetsCount, err), err
return processRealTimeRequestTargets(measurements, requestTargetsCount, constants.CodeUpdateSubTargetMissing, err), err
}
targetProcessResults, successfulTargets, newMeasMap, newMeasContextMap := processAndValidateTargetsForUpdate(ctx, tx, config, measurements, requestTargetsCount)
@ -722,13 +678,13 @@ func processRealTimeRequestCount(measurements []network.RealTimeMeasurementItem)
return totalTargetsCount
}
func processRealTimeRequestTargets(measurements []network.RealTimeMeasurementItem, targetCount int, err error) []network.TargetResult {
func processRealTimeRequestTargets(measurements []network.RealTimeMeasurementItem, targetCount int, businessCode int, err error) []network.TargetResult {
targetProcessResults := make([]network.TargetResult, 0, targetCount)
for _, measurementItem := range measurements {
for _, target := range measurementItem.Targets {
var targetResult network.TargetResult
targetResult.ID = target
targetResult.Code = constants.SubFailedCode
targetResult.Code = businessCode
targetResult.Msg = fmt.Sprintf("%s: %s", constants.SubFailedMsg, err.Error())
targetProcessResults = append(targetProcessResults, targetResult)
}

View File

@ -6,17 +6,21 @@ import (
"path"
"runtime"
"modelRT/constants"
"go.opentelemetry.io/otel/trace"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
)
// Logger is the interface returned by New for structured, trace-aware logging.
type Logger interface {
Debug(msg string, kv ...any)
Info(msg string, kv ...any)
Warn(msg string, kv ...any)
Error(msg string, kv ...any)
}
type logger struct {
ctx context.Context
traceID string
spanID string
pSpanID string
_logger *zap.Logger
}
@ -48,7 +52,10 @@ func makeLogFields(ctx context.Context, kv ...any) []zap.Field {
kv = append(kv, "unknown")
}
kv = append(kv, "traceID", ctx.Value(constants.HeaderTraceID), "spanID", ctx.Value(constants.HeaderSpanID), "parentSpanID", ctx.Value(constants.HeaderParentSpanID))
spanCtx := trace.SpanFromContext(ctx).SpanContext()
traceID := spanCtx.TraceID().String()
spanID := spanCtx.SpanID().String()
kv = append(kv, "traceID", traceID, "spanID", spanID)
funcName, file, line := getLoggerCallerInfo()
kv = append(kv, "func", funcName, "file", file, "line", line)
@ -89,23 +96,11 @@ func getLoggerCallerInfo() (funcName, file string, line int) {
return
}
func New(ctx context.Context) *logger {
var traceID, spanID, pSpanID string
if ctx.Value("traceID") != nil {
traceID = ctx.Value("traceID").(string)
}
if ctx.Value("spanID") != nil {
spanID = ctx.Value("spanID").(string)
}
if ctx.Value("psapnID") != nil {
pSpanID = ctx.Value("pspanID").(string)
}
// New returns a logger bound to ctx. Trace fields (traceID, spanID) are extracted
// from the OTel span stored in ctx and included in every log entry.
func New(ctx context.Context) Logger {
return &logger{
ctx: ctx,
traceID: traceID,
spanID: spanID,
pSpanID: pSpanID,
_logger: GetLoggerInstance(),
}
}

main.go
View File

@ -12,17 +12,24 @@ import (
"os/signal"
"path/filepath"
"syscall"
"time"
"modelRT/alert"
"modelRT/config"
"modelRT/constants"
"modelRT/database"
"modelRT/diagram"
"modelRT/logger"
"modelRT/middleware"
"modelRT/model"
"modelRT/mq"
"modelRT/pool"
"modelRT/real-time-data/alert"
"modelRT/router"
"modelRT/task"
"modelRT/util"
"github.com/gin-contrib/cors"
locker "modelRT/distributedlock"
_ "modelRT/docs"
@ -32,6 +39,7 @@ import (
"github.com/panjf2000/ants/v2"
swaggerFiles "github.com/swaggo/files"
ginSwagger "github.com/swaggo/gin-swagger"
"go.opentelemetry.io/otel"
"gorm.io/gorm"
)
@ -64,9 +72,9 @@ var (
//
// @host localhost:8080
// @BasePath /api/v1
func main() {
flag.Parse()
ctx := context.TODO()
configPath := filepath.Join(*modelRTConfigDir, *modelRTConfigName+"."+*modelRTConfigType)
if _, err := os.Stat(configPath); os.IsNotExist(err) {
@ -92,13 +100,29 @@ func main() {
logger.InitLoggerInstance(modelRTConfig.LoggerConfig)
defer logger.GetLoggerInstance().Sync()
// init OTel TracerProvider
tp, tpErr := middleware.InitTracerProvider(context.Background(), modelRTConfig)
if tpErr != nil {
log.Printf("warn: OTLP tracer init failed, tracing disabled: %v", tpErr)
}
if tp != nil {
defer func() {
shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
tp.Shutdown(shutdownCtx)
}()
}
ctx, startupSpan := otel.Tracer("modelRT/main").Start(context.Background(), "startup")
defer startupSpan.End()
hostName, err := os.Hostname()
if err != nil {
logger.Error(ctx, "get host name failed", "error", err)
panic(err)
}
serviceToken, err := util.GenerateClientToken(hostName, modelRTConfig.ServiceConfig.ServiceName, modelRTConfig.ServiceConfig.SecretKey)
serviceToken, err := util.GenerateClientToken(hostName, modelRTConfig.ServiceName, modelRTConfig.SecretKey)
if err != nil {
logger.Error(ctx, "generate client token failed", "error", err)
panic(err)
@ -127,13 +151,17 @@ func main() {
defer parsePool.Release()
searchPool, err := util.NewRedigoPool(modelRTConfig.StorageRedisConfig)
if err != nil {
logger.Error(ctx, "init redigo pool failed", "error", err)
panic(err)
}
defer searchPool.Close()
model.InitAutocompleterWithPool(searchPool)
storageClient := diagram.InitRedisClientInstance(modelRTConfig.StorageRedisConfig)
storageClient := diagram.InitRedisClientInstance(modelRTConfig.StorageRedisConfig, modelRTConfig.DeployEnv)
defer storageClient.Close()
lockerClient := locker.InitClientInstance(modelRTConfig.LockerRedisConfig)
lockerClient := locker.InitClientInstance(modelRTConfig.LockerRedisConfig, modelRTConfig.DeployEnv)
defer lockerClient.Close()
// init anchor param ants pool
@ -144,7 +172,25 @@ func main() {
}
defer anchorRealTimePool.Release()
postgresDBClient.Transaction(func(tx *gorm.DB) error {
// init rabbitmq connection
mq.InitRabbitProxy(ctx, modelRTConfig.RabbitMQConfig)
// init async task worker
taskWorker, err := task.InitTaskWorker(ctx, modelRTConfig, postgresDBClient)
if err != nil {
logger.Error(ctx, "Failed to initialize task worker", "error", err)
// Continue without task worker, but log warning
} else {
go taskWorker.Start()
defer taskWorker.Stop()
}
// async push event to rabbitMQ
go mq.PushUpDownLimitEventToRabbitMQ(ctx, mq.MsgChan)
// async push task message to rabbitMQ
go task.PushTaskToRabbitMQ(ctx, modelRTConfig.RabbitMQConfig, task.TaskMsgChan)
postgresDBClient.WithContext(ctx).Transaction(func(tx *gorm.DB) error {
// load circuit diagram from postgres
// componentTypeMap, err := database.QueryCircuitDiagramComponentFromDB(cancelCtx, tx, parsePool)
// if err != nil {
@ -193,9 +239,9 @@ func main() {
logger.Error(ctx, "load topologic info from postgres failed", "error", err)
panic(err)
}
go realtimedata.StartRealTimeDataComputing(ctx, allMeasurement)
go realtimedata.StartComputingRealTimeDataLimit(ctx, allMeasurement)
tree, err := database.QueryTopologicFromDB(ctx, tx)
tree, _, err := database.QueryTopologicFromDB(ctx, tx)
if err != nil {
logger.Error(ctx, "load topologic info from postgres failed", "error", err)
panic(err)
@ -204,26 +250,28 @@ func main() {
return nil
})
// use release mode in productio
// gin.SetMode(gin.ReleaseMode)
// use release mode in production
if modelRTConfig.DeployEnv == constants.ProductionDeployMode {
gin.SetMode(gin.ReleaseMode)
}
engine := gin.New()
// add CORS middleware
engine.Use(cors.New(cors.Config{
AllowOrigins: []string{"*"}, // or list specific origins
AllowMethods: []string{"GET", "POST", "PUT", "DELETE", "OPTIONS"},
AllowHeaders: []string{"Origin", "Content-Type", "Authorization"},
ExposeHeaders: []string{"Content-Length"},
AllowCredentials: true,
MaxAge: 12 * time.Hour,
}))
router.RegisterRoutes(engine, serviceToken)
// Swagger UI
if modelRTConfig.DeployEnv != constants.ProductionDeployMode {
engine.GET("/swagger/*any", ginSwagger.WrapHandler(swaggerFiles.Handler))
// register Swagger UI routes
// docs.SwaggerInfo.BasePath = "/model"
// v1 := engine.Group("/api/v1")
// {
// eg := v1.Group("/example")
// {
// eg.GET("/helloworld", Helloworld)
// }
// }
}
server := http.Server{
Addr: modelRTConfig.ServiceConfig.ServiceAddr,
Addr: modelRTConfig.ServiceAddr,
Handler: engine,
}
@ -232,9 +280,12 @@ func main() {
signal.Notify(done, os.Interrupt, syscall.SIGINT, syscall.SIGTERM)
go func() {
<-done
logger.Info(ctx, "shutdown signal received, cleaning up...")
if err := server.Shutdown(context.Background()); err != nil {
logger.Error(ctx, "shutdown serverError", "err", err)
}
mq.CloseRabbitProxy()
logger.Info(ctx, "resources cleaned up, exiting")
}()
logger.Info(ctx, "starting ModelRT server")
@ -249,3 +300,4 @@ func main() {
}
}
}


@ -0,0 +1,16 @@
// Package middleware define gin framework middlewares
package middleware
import (
"modelRT/config"
"github.com/gin-gonic/gin"
)
// ConfigMiddleware injects the global configuration into the Gin context
func ConfigMiddleware(modelRTConfig config.ModelRTConfig) gin.HandlerFunc {
return func(c *gin.Context) {
c.Set("config", modelRTConfig)
c.Next()
}
}


@ -1,3 +1,4 @@
// Package middleware define gin framework middlewares
package middleware
import (


@ -1,3 +1,4 @@
// Package middleware define gin framework middlewares
package middleware
import (


@ -1,3 +1,4 @@
// Package middleware define gin framework middlewares
package middleware
import "github.com/gin-gonic/gin"


@ -1,32 +1,94 @@
// Package middleware defines gin framework middlewares and OTel tracing infrastructure.
package middleware
import (
"bytes"
"context"
"fmt"
"io"
"strings"
"time"
"modelRT/config"
"modelRT/constants"
"modelRT/logger"
"modelRT/util"
"github.com/gin-gonic/gin"
"go.opentelemetry.io/contrib/propagators/b3"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
"go.opentelemetry.io/otel/propagation"
sdkresource "go.opentelemetry.io/otel/sdk/resource"
sdktrace "go.opentelemetry.io/otel/sdk/trace"
oteltrace "go.opentelemetry.io/otel/trace"
)
// StartTrace define func of set trace info from request header
func StartTrace() gin.HandlerFunc {
return func(c *gin.Context) {
traceID := c.Request.Header.Get(constants.HeaderTraceID)
parentSpanID := c.Request.Header.Get(constants.HeaderSpanID)
spanID := util.GenerateSpanID(c.Request.RemoteAddr)
// if traceId is empty, it means it is the origin of the link. Set it to the spanId of this time. The originating spanId is the root spanId.
if traceID == "" {
// traceId identifies the entire request link, and spanId identifies the different services in the link.
traceID = spanID
// InitTracerProvider creates an OTLP TracerProvider and registers it as the global provider.
// It also registers the B3 propagator to stay compatible with existing B3 infrastructure.
// The caller is responsible for calling Shutdown on the returned provider during graceful shutdown.
func InitTracerProvider(ctx context.Context, cfg config.ModelRTConfig) (*sdktrace.TracerProvider, error) {
opts := []otlptracehttp.Option{
otlptracehttp.WithEndpoint(cfg.OtelConfig.Endpoint),
}
c.Set(constants.HeaderTraceID, traceID)
c.Set(constants.HeaderSpanID, spanID)
c.Set(constants.HeaderParentSpanID, parentSpanID)
if cfg.OtelConfig.Insecure {
opts = append(opts, otlptracehttp.WithInsecure())
}
exporter, err := otlptracehttp.New(ctx, opts...)
if err != nil {
return nil, fmt.Errorf("create OTLP exporter: %w", err)
}
res := sdkresource.NewSchemaless(
attribute.String("service.name", cfg.ServiceName),
attribute.String("deployment.environment", cfg.DeployEnv),
)
tp := sdktrace.NewTracerProvider(
sdktrace.WithBatcher(exporter),
sdktrace.WithResource(res),
sdktrace.WithSampler(sdktrace.AlwaysSample()),
)
otel.SetTracerProvider(tp)
otel.SetTextMapPropagator(b3.New())
return tp, nil
}
// StartTrace extracts upstream B3 trace context from request headers and starts a server span.
// Typed context keys are also injected for backward compatibility with the existing logger
// until the logger is migrated to read from the OTel span context (Step 6).
func StartTrace() gin.HandlerFunc {
tracer := otel.Tracer("modelRT/http")
return func(c *gin.Context) {
// Extract upstream trace context from B3 headers (X-B3-TraceId etc.)
ctx := otel.GetTextMapPropagator().Extract(
c.Request.Context(),
propagation.HeaderCarrier(c.Request.Header),
)
spanName := c.FullPath()
if spanName == "" {
spanName = c.Request.URL.Path
}
ctx, span := tracer.Start(ctx, spanName,
oteltrace.WithSpanKind(oteltrace.SpanKindServer),
)
defer span.End()
// backward compat: inject typed keys so existing logger reads work until Step 6
spanCtx := span.SpanContext()
ctx = context.WithValue(ctx, constants.CtxKeyTraceID, spanCtx.TraceID().String())
ctx = context.WithValue(ctx, constants.CtxKeySpanID, spanCtx.SpanID().String())
c.Request = c.Request.WithContext(ctx)
// set in gin context for accessLog (logger.New(c) reads via gin.Context.Value)
c.Set(constants.HeaderTraceID, spanCtx.TraceID().String())
c.Set(constants.HeaderSpanID, spanCtx.SpanID().String())
c.Next()
}
}
@ -78,7 +140,6 @@ func LogAccess() gin.HandlerFunc {
accessLog(c, "access_end", time.Since(start), reqBody, responseLogging)
}()
c.Next()
return
}
}


@ -7,6 +7,7 @@ import (
"strconv"
"strings"
"modelRT/common"
"modelRT/constants"
)
@ -61,7 +62,7 @@ func generateChannelName(prefix string, number int, suffix string) (string, erro
switch prefix {
case constants.ChannelPrefixTelemetry:
if number > 10 {
return "", constants.ErrExceedsLimitType
return "", common.ErrExceedsLimitType
}
var builder strings.Builder
numberStr := strconv.Itoa(number)
@ -86,7 +87,7 @@ func generateChannelName(prefix string, number int, suffix string) (string, erro
channelName := builder.String()
return channelName, nil
default:
return "", constants.ErrUnsupportedChannelPrefixType
return "", common.ErrUnsupportedChannelPrefixType
}
}
@ -164,14 +165,14 @@ func (m MeasurementDataSource) GetIOAddress() (IOAddress, error) {
if addr, ok := m.IOAddress.(CL3611Address); ok {
return addr, nil
}
return nil, constants.ErrInvalidAddressType
return nil, common.ErrInvalidAddressType
case constants.DataSourceTypePower104:
if addr, ok := m.IOAddress.(Power104Address); ok {
return addr, nil
}
return nil, constants.ErrInvalidAddressType
return nil, common.ErrInvalidAddressType
default:
return nil, constants.ErrUnknownDataType
return nil, common.ErrUnknownDataType
}
}


@ -22,7 +22,7 @@ func GetNSpathToIsLocalMap(ctx context.Context, db *gorm.DB) (map[string]bool, e
var results []ComponentStationRelation
nspathMap := make(map[string]bool)
err := db.Table("component").
err := db.WithContext(ctx).Table("component").
Select("component.nspath, station.is_local").
Joins("join station on component.station_id = station.id").
Scan(&results).Error


@ -550,7 +550,6 @@ func handleLevelFuzzySearch(ctx context.Context, rdb *redis.Client, hierarchy co
IsFuzzy: true,
Err: nil,
}
return
}
// runFuzzySearch define func to process redis fuzzy search

mq/event/event.go Normal file

@ -0,0 +1,39 @@
// Package event defines real-time data event operation functions
package event
// EventRecord defines the struct for a CIM event record
type EventRecord struct {
// event name
EventName string `json:"event"`
// unique event identifier
EventUUID string `json:"event_uuid"`
// event type
Type int `json:"type"`
// event priority (0-9)
Priority int `json:"priority"`
// event status
Status int `json:"status"`
// optional template parameter
Category string `json:"category,omitempty"`
// millisecond timestamp (Unix epoch)
Timestamp int64 `json:"timestamp"`
// event source (station, platform, msa)
From string `json:"from"`
// event scenario description object (e.g. threshold, current value)
Condition map[string]any `json:"condition"`
// subscription info attached to the event
AttachedSubscriptions []any `json:"attached_subscriptions"`
// event analysis result object
Result map[string]any `json:"result,omitempty"`
// operation history (CIM ActivityRecord)
Operations []OperationRecord `json:"operations"`
// raw substation alarm data (CIM Alarm data)
Origin map[string]any `json:"origin,omitempty"`
}
// OperationRecord describes an operation performed on an event, such as acknowledgment
type OperationRecord struct {
Action string `json:"action"` // action performed, e.g. "acknowledgment"
Op string `json:"op"` // operator / operator account identifier
TS int64 `json:"ts"` // millisecond timestamp of the operation
}


@ -0,0 +1,82 @@
// Package event defines real-time data event operation functions
package event
import (
"context"
"modelRT/common"
"modelRT/logger"
)
type actionHandler func(ctx context.Context, content string, ops ...EventOption) (*EventRecord, error)
// actionDispatchMap maps action commands to their handlers
var actionDispatchMap = map[string]actionHandler{
"info": handleInfoAction,
"warning": handleWarningAction,
"error": handleErrorAction,
"critical": handleCriticalAction,
"exception": handleExceptionAction,
}
// TriggerEventAction define func to trigger event by action in compute config
func TriggerEventAction(ctx context.Context, command string, eventName string, ops ...EventOption) (*EventRecord, error) {
handler, exists := actionDispatchMap[command]
if !exists {
logger.Error(ctx, "unknown action command", "command", command)
return nil, common.ErrUnknowEventActionCommand
}
eventRecord, err := handler(ctx, eventName, ops...)
if err != nil {
logger.Error(ctx, "action event handler failed", "error", err)
return nil, common.ErrExecEventActionFailed
}
return eventRecord, nil
}
func handleInfoAction(ctx context.Context, eventName string, ops ...EventOption) (*EventRecord, error) {
logger.Info(ctx, "trigger info event", "event_name", eventName)
eventRecord, err := NewGeneralPlatformSoftRecord(eventName, ops...)
if err != nil {
logger.Error(ctx, "generate info event record failed", "error", err)
return nil, err
}
return eventRecord, nil
}
func handleWarningAction(ctx context.Context, eventName string, ops ...EventOption) (*EventRecord, error) {
logger.Info(ctx, "trigger warning event", "event_name", eventName)
eventRecord, err := NewWarnPlatformSoftRecord(eventName, ops...)
if err != nil {
logger.Error(ctx, "generate warning event record failed", "error", err)
return nil, err
}
return eventRecord, nil
}
func handleErrorAction(ctx context.Context, eventName string, ops ...EventOption) (*EventRecord, error) {
logger.Info(ctx, "trigger error event", "event_name", eventName)
eventRecord, err := NewCriticalPlatformSoftRecord(eventName, ops...)
if err != nil {
logger.Error(ctx, "generate error event record failed", "error", err)
return nil, err
}
return eventRecord, nil
}
func handleCriticalAction(ctx context.Context, content string, ops ...EventOption) (*EventRecord, error) {
// actually send the alert, write logs, etc.
actionParams := content
// ... logic to send critical level event using actionParams ...
logger.Warn(ctx, "trigger critical event", "message", actionParams)
return nil, nil
}
func handleExceptionAction(ctx context.Context, content string, ops ...EventOption) (*EventRecord, error) {
// actually send the alert, write logs, etc.
actionParams := content
// ... logic to send exception level event using actionParams ...
logger.Warn(ctx, "trigger exception event", "message", actionParams)
return nil, nil
}

mq/event/event_options.go Normal file

@ -0,0 +1,85 @@
// Package event defines real-time data event operation functions
package event
import (
"maps"
"strings"
)
// EventOption define option function type for event record creation
type EventOption func(*EventRecord)
// WithCondition define option function to set event condition description
func WithCondition(cond map[string]any) EventOption {
return func(e *EventRecord) {
if cond != nil {
e.Condition = cond
}
}
}
// WithSubscriptions define option function to set event attached subscription information
func WithSubscriptions(subs []any) EventOption {
return func(e *EventRecord) {
if subs != nil {
e.AttachedSubscriptions = subs
}
}
}
// WithOperations define option function to set event operation records
func WithOperations(ops []OperationRecord) EventOption {
return func(e *EventRecord) {
if ops != nil {
e.Operations = ops
}
}
}
// WithCategory define option function to set event category
func WithCategory(cat string) EventOption {
return func(e *EventRecord) {
e.Category = cat
}
}
// WithResult define option function to set event analysis result
func WithResult(result map[string]any) EventOption {
return func(e *EventRecord) {
e.Result = result
}
}
// WithTEAnalysisResult define option function to set the analysis result description by breach type
func WithTEAnalysisResult(breachType string) EventOption {
return func(e *EventRecord) {
if e.Result == nil {
e.Result = make(map[string]any)
}
description := "数据异常"
switch strings.ToLower(breachType) {
case "upup":
description = "超越上上限"
case "up":
description = "超越上限"
case "down":
description = "超越下限"
case "downdown":
description = "超越下下限"
}
e.Result["analysis_desc"] = description
e.Result["breach_type"] = breachType
}
}
// WithConditionValue define option function to set event condition with real time value and extra data
func WithConditionValue(realTimeValue []float64, extraData map[string]any) EventOption {
return func(e *EventRecord) {
if e.Condition == nil {
e.Condition = make(map[string]any)
}
e.Condition["real_time_value"] = realTimeValue
maps.Copy(e.Condition, extraData)
}
}
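The options above follow the functional-options pattern: each `EventOption` is a closure that mutates the record, and the constructor applies them in order. A self-contained sketch of how they compose, using trimmed-down local stand-ins (`record`, `withCategory`, `withConditionValue` — not the project's types):

```go
package main

import "fmt"

// record is a local stand-in for event.EventRecord, for illustration only.
type record struct {
	Category  string
	Condition map[string]any
}

// option mirrors event.EventOption: a closure that mutates the record.
type option func(*record)

func withCategory(cat string) option {
	return func(r *record) { r.Category = cat }
}

func withConditionValue(v []float64) option {
	return func(r *record) {
		if r.Condition == nil {
			r.Condition = make(map[string]any)
		}
		r.Condition["real_time_value"] = v
	}
}

// newRecord applies options in order, mirroring NewPlatformEventRecord's loop.
func newRecord(opts ...option) *record {
	r := &record{}
	for _, opt := range opts {
		opt(r)
	}
	return r
}

func main() {
	r := newRecord(withCategory("updown"), withConditionValue([]float64{3.14}))
	fmt.Println(r.Category, r.Condition["real_time_value"])
}
```

Later options can see earlier mutations (as `WithTEAnalysisResult` relies on when lazily creating `Result`), so option order matters when two options touch the same map.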

mq/event/gen_event.go Normal file

@ -0,0 +1,68 @@
// Package event defines real-time data event operation functions
package event
import (
"fmt"
"time"
"modelRT/constants"
"github.com/gofrs/uuid"
)
// NewPlatformEventRecord define func to create a new platform event record with common fields initialized
func NewPlatformEventRecord(eventType int, priority int, eventName string, opts ...EventOption) (*EventRecord, error) {
u, err := uuid.NewV4()
if err != nil {
return nil, fmt.Errorf("failed to generate UUID: %w", err)
}
record := &EventRecord{
EventName: eventName,
EventUUID: u.String(),
Type: eventType,
Priority: priority,
Status: 1,
From: constants.EventFromPlatform,
Timestamp: time.Now().UnixNano() / int64(time.Millisecond),
Condition: make(map[string]any),
AttachedSubscriptions: make([]any, 0),
Operations: make([]OperationRecord, 0),
}
for _, opt := range opts {
opt(record)
}
return record, nil
}
// NewGeneralPlatformSoftRecord define func to create a new general platform software event record
func NewGeneralPlatformSoftRecord(name string, opts ...EventOption) (*EventRecord, error) {
return NewPlatformEventRecord(int(constants.EventGeneralPlatformSoft), 0, name, opts...)
}
// NewGeneralApplicationSoftRecord define func to create a new general application software event record
func NewGeneralApplicationSoftRecord(name string, opts ...EventOption) (*EventRecord, error) {
return NewPlatformEventRecord(int(constants.EventGeneralApplicationSoft), 0, name, opts...)
}
// NewWarnPlatformSoftRecord define func to create a new warning platform software event record
func NewWarnPlatformSoftRecord(name string, opts ...EventOption) (*EventRecord, error) {
return NewPlatformEventRecord(int(constants.EventWarnPlatformSoft), 3, name, opts...)
}
// NewWarnApplicationSoftRecord define func to create a new warning application software event record
func NewWarnApplicationSoftRecord(name string, opts ...EventOption) (*EventRecord, error) {
return NewPlatformEventRecord(int(constants.EventWarnApplicationSoft), 3, name, opts...)
}
// NewCriticalPlatformSoftRecord define func to create a new critical platform software event record
func NewCriticalPlatformSoftRecord(name string, opts ...EventOption) (*EventRecord, error) {
return NewPlatformEventRecord(int(constants.EventCriticalPlatformSoft), 6, name, opts...)
}
// NewCriticalApplicationSoftRecord define func to create a new critical application software event record
func NewCriticalApplicationSoftRecord(name string, opts ...EventOption) (*EventRecord, error) {
return NewPlatformEventRecord(int(constants.EventCriticalApplicationSoft), 6, name, opts...)
}


@ -0,0 +1,146 @@
// Package mq provides read or write access to message queue services
package mq
import (
"context"
"encoding/json"
"time"
"modelRT/constants"
"modelRT/logger"
"modelRT/mq/event"
amqp "github.com/rabbitmq/amqp091-go"
)
// MsgChan define variable of channel to store messages that need to be sent to rabbitMQ
var MsgChan chan *event.EventRecord
func init() {
MsgChan = make(chan *event.EventRecord, 10000)
}
func initUpDownLimitEventChannel(ctx context.Context) (*amqp.Channel, error) {
var channel *amqp.Channel
var err error
channel, err = GetConn().Channel()
if err != nil {
logger.Error(ctx, "open rabbitMQ server channel failed", "error", err)
return nil, err
}
err = channel.ExchangeDeclare(constants.EventDeadExchangeName, "topic", true, false, false, false, nil)
if err != nil {
logger.Error(ctx, "declare event dead letter exchange failed", "error", err)
return nil, err
}
_, err = channel.QueueDeclare(constants.EventUpDownDeadQueueName, true, false, false, false, nil)
if err != nil {
logger.Error(ctx, "declare event dead letter queue failed", "error", err)
return nil, err
}
err = channel.QueueBind(constants.EventUpDownDeadQueueName, "#", constants.EventDeadExchangeName, false, nil)
if err != nil {
logger.Error(ctx, "bind event dead letter queue with routing key and exchange failed", "error", err)
return nil, err
}
err = channel.ExchangeDeclare(constants.EventExchangeName, "topic", true, false, false, false, nil)
if err != nil {
logger.Error(ctx, "declare event exchange failed", "error", err)
return nil, err
}
args := amqp.Table{
"x-max-length": int32(50),
"x-dead-letter-exchange": constants.EventDeadExchangeName,
"x-dead-letter-routing-key": constants.EventUpDownDeadRoutingKey,
}
_, err = channel.QueueDeclare(constants.EventUpDownQueueName, true, false, false, false, args)
if err != nil {
logger.Error(ctx, "declare event queue failed", "error", err)
return nil, err
}
err = channel.QueueBind(constants.EventUpDownQueueName, constants.EventUpDownRoutingKey, constants.EventExchangeName, false, nil)
if err != nil {
logger.Error(ctx, "bind event queue with routing key and exchange failed", "error", err)
return nil, err
}
if err := channel.Confirm(false); err != nil {
logger.Error(ctx, "channel could not be put into confirm mode", "error", err)
return nil, err
}
return channel, nil
}
// PushUpDownLimitEventToRabbitMQ define func to push up and down limit event message to rabbitMQ
func PushUpDownLimitEventToRabbitMQ(ctx context.Context, msgChan chan *event.EventRecord) {
channel, err := initUpDownLimitEventChannel(ctx)
if err != nil {
logger.Error(ctx, "initializing rabbitMQ channel failed", "error", err)
return
}
// TODO: make the confirm-mode channel parameters configurable
confirms := channel.NotifyPublish(make(chan amqp.Confirmation, 100))
go func() {
for {
select {
case confirm, ok := <-confirms:
if !ok {
return
}
if !confirm.Ack {
logger.Error(ctx, "publish message failed (rejected by rabbitMQ)", "tag", confirm.DeliveryTag)
}
case <-ctx.Done():
return
}
}
}()
for {
select {
case <-ctx.Done():
logger.Info(ctx, "push event alarm message to rabbitMQ stopped by context cancel")
channel.Close()
return
case eventRecord, ok := <-msgChan:
if !ok {
logger.Info(ctx, "push event alarm message to rabbitMQ stopped by msgChan closed, exiting push loop")
channel.Close()
return
}
// TODO: move message serialization to just before sending so the eventRecord category can be used as the routing key
recordBytes, err := json.Marshal(eventRecord)
if err != nil {
logger.Error(ctx, "marshal event record failed", "event_uuid", eventRecord.EventUUID, "error", err)
continue
}
// send event alarm message to rabbitMQ queue
routingKey := eventRecord.Category
pubCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
err = channel.PublishWithContext(pubCtx,
constants.EventExchangeName, // exchange
routingKey, // routing key
false, // mandatory
false, // immediate
amqp.Publishing{
ContentType: "text/plain",
Body: recordBytes,
})
cancel()
if err != nil {
logger.Error(ctx, "publish message to rabbitMQ queue failed", "message", recordBytes, "error", err)
}
}
}
}

mq/rabbitmq_init.go Normal file

@ -0,0 +1,217 @@
// Package mq defines message queue operation functions
package mq
import (
"context"
"crypto/tls"
"crypto/x509"
"encoding/pem"
"fmt"
"os"
"sync"
"time"
"modelRT/config"
"modelRT/logger"
amqp "github.com/rabbitmq/amqp091-go"
"github.com/youmark/pkcs8"
)
var (
_globalRabbitMQProxy *RabbitMQProxy
rabbitMQOnce sync.Once
)
// RabbitMQProxy defines the struct of the rabbitMQ connection proxy
type RabbitMQProxy struct {
tlsConf *tls.Config
conn *amqp.Connection
cancel context.CancelFunc
mu sync.Mutex
}
// rabbitMQCertConf defines the struct of the rabbitMQ connection certificates config
type rabbitMQCertConf struct {
serverName string
insecureSkipVerify bool
clientCert tls.Certificate
caCertPool *x509.CertPool
}
// GetConn define func to return the rabbitMQ connection
func GetConn() *amqp.Connection {
_globalRabbitMQProxy.mu.Lock()
defer _globalRabbitMQProxy.mu.Unlock()
return _globalRabbitMQProxy.conn
}
// InitRabbitProxy return instance of rabbitMQ connection
func InitRabbitProxy(ctx context.Context, rCfg config.RabbitMQConfig) *RabbitMQProxy {
amqpURI := generateRabbitMQURI(rCfg)
tlsConf, err := initCertConf(rCfg)
if err != nil {
logger.Error(ctx, "init rabbitMQ cert config failed", "error", err)
panic(err)
}
rabbitMQOnce.Do(func() {
cancelCtx, cancel := context.WithCancel(ctx)
conn := initRabbitMQ(ctx, amqpURI, tlsConf)
_globalRabbitMQProxy = &RabbitMQProxy{tlsConf: tlsConf, conn: conn, cancel: cancel}
go _globalRabbitMQProxy.handleReconnect(cancelCtx, amqpURI)
})
return _globalRabbitMQProxy
}
// initRabbitMQ return instance of rabbitMQ connection
func initRabbitMQ(ctx context.Context, rabbitMQURI string, tlsConf *tls.Config) *amqp.Connection {
logger.Info(ctx, "connecting to rabbitMQ server", "rabbitmq_uri", rabbitMQURI)
conn, err := amqp.DialConfig(rabbitMQURI, amqp.Config{
TLSClientConfig: tlsConf,
SASL: []amqp.Authentication{&amqp.ExternalAuth{}},
Heartbeat: 10 * time.Second,
})
if err != nil {
logger.Error(ctx, "init rabbitMQ connection failed", "error", err)
panic(err)
}
return conn
}
func (p *RabbitMQProxy) handleReconnect(ctx context.Context, rabbitMQURI string) {
for {
closeChan := make(chan *amqp.Error)
GetConn().NotifyClose(closeChan)
select {
case <-ctx.Done():
logger.Info(ctx, "context cancelled, exiting handleReconnect")
return
case err, ok := <-closeChan:
if !ok {
logger.Info(ctx, "rabbitMQ notify channel closed")
return
}
if err == nil {
logger.Info(ctx, "rabbitMQ connection closed normally, no need to reconnect")
return
}
logger.Warn(ctx, "rabbitMQ connection closed by error, starting reconnect", "reason", err)
}
if !p.reconnect(ctx, rabbitMQURI) {
return
}
}
}
func (p *RabbitMQProxy) reconnect(ctx context.Context, rabbitMQURI string) bool {
for {
logger.Info(ctx, "attempting to reconnect to rabbitMQ...")
select {
case <-ctx.Done():
return false
case <-time.After(5 * time.Second):
}
newConn, err := amqp.DialConfig(rabbitMQURI, amqp.Config{
TLSClientConfig: p.tlsConf,
SASL: []amqp.Authentication{&amqp.ExternalAuth{}},
Heartbeat: 10 * time.Second,
})
if err == nil {
p.mu.Lock()
p.conn = newConn
p.mu.Unlock()
logger.Info(ctx, "rabbitMQ reconnected successfully")
return true
}
logger.Error(ctx, "rabbitMQ reconnect failed, will retry", "err", err)
}
}
// CloseRabbitProxy close the rabbitMQ connection and stop reconnect goroutine
func CloseRabbitProxy() {
if _globalRabbitMQProxy != nil {
_globalRabbitMQProxy.cancel()
_globalRabbitMQProxy.mu.Lock()
if _globalRabbitMQProxy.conn != nil {
_globalRabbitMQProxy.conn.Close()
}
_globalRabbitMQProxy.mu.Unlock()
}
}
func generateRabbitMQURI(rCfg config.RabbitMQConfig) string {
// TODO: consider splitting the username/password config items to support different authentication methods
// user := url.QueryEscape(rCfg.User)
// password := url.QueryEscape(rCfg.Password)
// amqpURI := fmt.Sprintf("amqps://%s:%s@%s:%d/",
// user,
// password,
// rCfg.Host,
// rCfg.Port,
// )
amqpURI := fmt.Sprintf("amqps://%s:%d/",
rCfg.Host,
rCfg.Port,
)
return amqpURI
}
func initCertConf(rCfg config.RabbitMQConfig) (*tls.Config, error) {
tlsConf := &tls.Config{
InsecureSkipVerify: rCfg.InsecureSkipVerify,
ServerName: rCfg.ServerName,
}
caCert, err := os.ReadFile(rCfg.CACertPath)
if err != nil {
return nil, fmt.Errorf("read server ca file failed: %w", err)
}
caCertPool := x509.NewCertPool()
if ok := caCertPool.AppendCertsFromPEM(caCert); !ok {
return nil, fmt.Errorf("failed to parse root certificate from %s", rCfg.CACertPath)
}
tlsConf.RootCAs = caCertPool
certPEM, err := os.ReadFile(rCfg.ClientCertPath)
if err != nil {
return nil, fmt.Errorf("read client cert file failed: %w", err)
}
keyData, err := os.ReadFile(rCfg.ClientKeyPath)
if err != nil {
return nil, fmt.Errorf("read private key file failed: %w", err)
}
block, _ := pem.Decode(keyData)
if block == nil {
return nil, fmt.Errorf("failed to decode PEM block from private key")
}
der, err := pkcs8.ParsePKCS8PrivateKey(block.Bytes, []byte(rCfg.ClientKeyPassword))
if err != nil {
return nil, fmt.Errorf("parse password-protected private key failed: %w", err)
}
privBytes, err := x509.MarshalPKCS8PrivateKey(der)
if err != nil {
return nil, fmt.Errorf("marshal private key failed: %w", err)
}
keyPEM := pem.EncodeToMemory(&pem.Block{Type: "PRIVATE KEY", Bytes: privBytes})
clientCert, err := tls.X509KeyPair(certPEM, keyPEM)
if err != nil {
return nil, fmt.Errorf("create x509 key pair failed: %w", err)
}
tlsConf.Certificates = []tls.Certificate{clientCert}
return tlsConf, nil
}


@ -0,0 +1,96 @@
// Package network define struct of network operation
package network
import (
"time"
"github.com/gofrs/uuid"
)
// AsyncTaskCreateRequest defines the request structure for creating an asynchronous task
type AsyncTaskCreateRequest struct {
// required: true
// enum: TOPOLOGY_ANALYSIS, PERFORMANCE_ANALYSIS, EVENT_ANALYSIS, BATCH_IMPORT
TaskType string `json:"task_type" example:"TOPOLOGY_ANALYSIS" description:"异步任务类型"`
// required: true
Params map[string]interface{} `json:"params" swaggertype:"object" description:"任务参数,根据任务类型不同而不同"`
}
// AsyncTaskCreateResponse defines the response structure for creating an asynchronous task
type AsyncTaskCreateResponse struct {
TaskID uuid.UUID `json:"task_id" example:"123e4567-e89b-12d3-a456-426614174000" description:"任务唯一标识符"`
}
// AsyncTaskResultQueryRequest defines the request structure for querying task results
type AsyncTaskResultQueryRequest struct {
// required: true
TaskIDs []uuid.UUID `json:"task_ids" swaggertype:"array,string" example:"[\"123e4567-e89b-12d3-a456-426614174000\",\"223e4567-e89b-12d3-a456-426614174001\"]" description:"任务ID列表"`
}
// AsyncTaskResult defines the structure for a single task result
type AsyncTaskResult struct {
TaskID uuid.UUID `json:"task_id" example:"123e4567-e89b-12d3-a456-426614174000" description:"任务唯一标识符"`
TaskType string `json:"task_type" example:"TOPOLOGY_ANALYSIS" description:"任务类型"`
Status string `json:"status" example:"COMPLETED" description:"任务状态SUBMITTED, RUNNING, COMPLETED, FAILED"`
Progress *int `json:"progress,omitempty" example:"65" description:"任务进度(0-100)仅当状态为RUNNING时返回"`
CreatedAt int64 `json:"created_at" example:"1741846200" description:"任务创建时间戳"`
FinishedAt *int64 `json:"finished_at,omitempty" example:"1741846205" description:"任务完成时间戳仅当状态为COMPLETED或FAILED时返回"`
Result map[string]interface{} `json:"result,omitempty" swaggertype:"object" description:"任务结果仅当状态为COMPLETED时返回"`
ErrorCode *int `json:"error_code,omitempty" example:"400102" description:"错误码仅当状态为FAILED时返回"`
ErrorMessage *string `json:"error_message,omitempty" example:"Component UUID not found" description:"错误信息仅当状态为FAILED时返回"`
ErrorDetail map[string]interface{} `json:"error_detail,omitempty" swaggertype:"object" description:"错误详情仅当状态为FAILED时返回"`
}
// AsyncTaskResultQueryResponse defines the response structure for querying task results
type AsyncTaskResultQueryResponse struct {
Total int `json:"total" example:"3" description:"查询的任务总数"`
Tasks []AsyncTaskResult `json:"tasks" description:"任务结果列表"`
}
// AsyncTaskProgressUpdate defines the structure for task progress update
type AsyncTaskProgressUpdate struct {
TaskID uuid.UUID `json:"task_id" example:"123e4567-e89b-12d3-a456-426614174000" description:"任务唯一标识符"`
Progress int `json:"progress" example:"50" description:"任务进度(0-100)"`
}
// AsyncTaskStatusUpdate defines the structure for task status update
type AsyncTaskStatusUpdate struct {
TaskID uuid.UUID `json:"task_id" example:"123e4567-e89b-12d3-a456-426614174000" description:"任务唯一标识符"`
Status string `json:"status" example:"RUNNING" description:"任务状态SUBMITTED, RUNNING, COMPLETED, FAILED"`
Timestamp int64 `json:"timestamp" example:"1741846205" description:"状态更新时间戳"`
}
// TopologyAnalysisParams defines the parameters for topology analysis task
type TopologyAnalysisParams struct {
StartComponentUUID string `json:"start_component_uuid" example:"550e8400-e29b-41d4-a716-446655440000" description:"起始元件UUID"`
EndComponentUUID string `json:"end_component_uuid" example:"550e8400-e29b-41d4-a716-446655440001" description:"目标元件UUID"`
CheckInService bool `json:"check_in_service" example:"true" description:"是否检查路径上元件的投运状态默认为true"`
}
// PerformanceAnalysisParams defines the parameters for performance analysis task
type PerformanceAnalysisParams struct {
ComponentIDs []string `json:"component_ids" example:"[\"comp-001\",\"comp-002\"]" description:"需要分析的元件ID列表"`
TimeRange struct {
Start time.Time `json:"start" example:"2026-03-01T00:00:00Z" description:"分析开始时间"`
End time.Time `json:"end" example:"2026-03-02T00:00:00Z" description:"分析结束时间"`
} `json:"time_range" description:"分析时间范围"`
}
// EventAnalysisParams defines the parameters for event analysis task
type EventAnalysisParams struct {
EventType string `json:"event_type" example:"MOTOR_START" description:"事件类型"`
StartTime time.Time `json:"start_time" example:"2026-03-01T00:00:00Z" description:"事件开始时间"`
EndTime time.Time `json:"end_time" example:"2026-03-02T00:00:00Z" description:"事件结束时间"`
Components []string `json:"components,omitempty" example:"[\"comp-001\",\"comp-002\"]" description:"关联的元件列表"`
}
// BatchImportParams defines the parameters for batch import task
type BatchImportParams struct {
FilePath string `json:"file_path" example:"/data/import/model.csv" description:"import file path"`
FileType string `json:"file_type" example:"CSV" description:"file type: CSV, JSON, XML"`
Options struct {
Overwrite bool `json:"overwrite" example:"false" description:"whether to overwrite existing data"`
Validate bool `json:"validate" example:"true" description:"whether to perform data validation"`
NotifyUser bool `json:"notify_user" example:"true" description:"whether to notify the user"`
} `json:"options" description:"import options"`
}

View File

@ -5,6 +5,7 @@ import (
"fmt"
"time"
"modelRT/common"
"modelRT/common/errcode"
"modelRT/constants"
"modelRT/orm"
@ -64,10 +65,10 @@ func ParseUUID(info TopologicChangeInfo) (TopologicUUIDChangeInfos, error) {
switch info.ChangeType {
case constants.UUIDFromChangeType:
if info.NewUUIDFrom == info.OldUUIDFrom {
return UUIDChangeInfo, fmt.Errorf("topologic change data check failed:%w", constants.ErrUUIDFromCheckT1)
return UUIDChangeInfo, fmt.Errorf("topologic change data check failed:%w", common.ErrUUIDFromCheckT1)
}
if info.NewUUIDTo != info.OldUUIDTo {
return UUIDChangeInfo, fmt.Errorf("topologic change data check failed:%w", constants.ErrUUIDToCheckT1)
return UUIDChangeInfo, fmt.Errorf("topologic change data check failed:%w", common.ErrUUIDToCheckT1)
}
oldUUIDFrom, err := uuid.FromString(info.OldUUIDFrom)
@ -90,10 +91,10 @@ func ParseUUID(info TopologicChangeInfo) (TopologicUUIDChangeInfos, error) {
UUIDChangeInfo.NewUUIDTo = OldUUIDTo
case constants.UUIDToChangeType:
if info.NewUUIDFrom != info.OldUUIDFrom {
return UUIDChangeInfo, fmt.Errorf("topologic change data check failed:%w", constants.ErrUUIDFromCheckT2)
return UUIDChangeInfo, fmt.Errorf("topologic change data check failed:%w", common.ErrUUIDFromCheckT2)
}
if info.NewUUIDTo == info.OldUUIDTo {
return UUIDChangeInfo, fmt.Errorf("topologic change data check failed:%w", constants.ErrUUIDToCheckT2)
return UUIDChangeInfo, fmt.Errorf("topologic change data check failed:%w", common.ErrUUIDToCheckT2)
}
oldUUIDFrom, err := uuid.FromString(info.OldUUIDFrom)
@ -116,10 +117,10 @@ func ParseUUID(info TopologicChangeInfo) (TopologicUUIDChangeInfos, error) {
UUIDChangeInfo.NewUUIDTo = newUUIDTo
case constants.UUIDAddChangeType:
if info.OldUUIDFrom != "" {
return UUIDChangeInfo, fmt.Errorf("topologic change data check failed:%w", constants.ErrUUIDFromCheckT3)
return UUIDChangeInfo, fmt.Errorf("topologic change data check failed:%w", common.ErrUUIDFromCheckT3)
}
if info.OldUUIDTo != "" {
return UUIDChangeInfo, fmt.Errorf("topologic change data check failed:%w", constants.ErrUUIDToCheckT3)
return UUIDChangeInfo, fmt.Errorf("topologic change data check failed:%w", common.ErrUUIDToCheckT3)
}
newUUIDFrom, err := uuid.FromString(info.NewUUIDFrom)
@ -157,7 +158,7 @@ func ConvertComponentUpdateInfosToComponents(updateInfo ComponentUpdateInfo) (*o
// Op: info.Op,
// Tag: info.Tag,
// other fields can be added as needed
Ts: time.Now(),
TS: time.Now(),
}
return component, nil
}

View File

@ -3,18 +3,25 @@ package network
// FailureResponse defines the standard failure API response format
type FailureResponse struct {
Code int `json:"code" example:"500"`
Msg string `json:"msg" example:"failed to get recommend data from redis"`
Code int `json:"code" example:"3000"`
Msg string `json:"msg" example:"process completed with partial failures"`
Payload any `json:"payload" swaggertype:"object"`
}
// SuccessResponse defines the standard successful API response format
type SuccessResponse struct {
Code int `json:"code" example:"200"`
Msg string `json:"msg" example:"success"`
Code int `json:"code" example:"2000"`
Msg string `json:"msg" example:"process completed"`
Payload any `json:"payload" swaggertype:"object"`
}
// WSResponse defines the standard WebSocket API response format
type WSResponse struct {
Code int `json:"code" example:"2000"`
Msg string `json:"msg" example:"process completed"`
Payload any `json:"payload,omitempty" swaggertype:"object"`
}
// MeasurementRecommendPayload represents the data payload for a successful recommendation response.
type MeasurementRecommendPayload struct {
Input string `json:"input" example:"transformfeeder1_220."`
@ -26,7 +33,7 @@ type MeasurementRecommendPayload struct {
// TargetResult defines a target item in the real-time data subscription response payload
type TargetResult struct {
ID string `json:"id" example:"grid1.zone1.station1.ns1.tag1.transformfeeder1_220.I_A_rms"`
Code string `json:"code" example:"1001"`
Code int `json:"code" example:"20000"`
Msg string `json:"msg" example:"subscription success"`
}

View File

@ -85,7 +85,6 @@ func (a *AsyncMotor) TableName() string {
// SetComponentID implements the BasicModelInterface interface
func (a *AsyncMotor) SetComponentID(componentID int64) {
a.ComponentID = componentID
return
}
// ReturnTableName implements the BasicModelInterface interface

orm/async_task.go Normal file (129 lines added)
View File

@ -0,0 +1,129 @@
// Package orm defines database data structures
package orm
import (
"github.com/gofrs/uuid"
)
// AsyncTaskType defines the type of asynchronous task
type AsyncTaskType string
const (
// AsyncTaskTypeTopologyAnalysis represents topology analysis task
AsyncTaskTypeTopologyAnalysis AsyncTaskType = "TOPOLOGY_ANALYSIS"
// AsyncTaskTypePerformanceAnalysis represents performance analysis task
AsyncTaskTypePerformanceAnalysis AsyncTaskType = "PERFORMANCE_ANALYSIS"
// AsyncTaskTypeEventAnalysis represents event analysis task
AsyncTaskTypeEventAnalysis AsyncTaskType = "EVENT_ANALYSIS"
// AsyncTaskTypeBatchImport represents batch import task
AsyncTaskTypeBatchImport AsyncTaskType = "BATCH_IMPORT"
// AsyncTaskTypeTest represents test task for system verification
AsyncTaskTypeTest AsyncTaskType = "TEST"
)
// AsyncTaskStatus defines the status of asynchronous task
type AsyncTaskStatus string
const (
// AsyncTaskStatusSubmitted represents task has been submitted to queue
AsyncTaskStatusSubmitted AsyncTaskStatus = "SUBMITTED"
// AsyncTaskStatusRunning represents task is currently executing
AsyncTaskStatusRunning AsyncTaskStatus = "RUNNING"
// AsyncTaskStatusCompleted represents task completed successfully
AsyncTaskStatusCompleted AsyncTaskStatus = "COMPLETED"
// AsyncTaskStatusFailed represents task failed with error
AsyncTaskStatusFailed AsyncTaskStatus = "FAILED"
)
// AsyncTask defines the core task entity stored in database for task lifecycle tracking
type AsyncTask struct {
TaskID uuid.UUID `gorm:"column:task_id;primaryKey;type:uuid;default:gen_random_uuid()"`
TaskType AsyncTaskType `gorm:"column:task_type;type:varchar(50);not null;index"`
Status AsyncTaskStatus `gorm:"column:status;type:varchar(20);not null;index"`
Params JSONMap `gorm:"column:params;type:jsonb"`
CreatedAt int64 `gorm:"column:created_at;not null;index"`
FinishedAt *int64 `gorm:"column:finished_at;index"`
StartedAt *int64 `gorm:"column:started_at;index"`
ExecutionTime *int64 `gorm:"column:execution_time"`
Progress *int `gorm:"column:progress"` // 0-100, nullable
RetryCount int `gorm:"column:retry_count;default:0"`
MaxRetryCount int `gorm:"column:max_retry_count;default:3"`
NextRetryTime *int64 `gorm:"column:next_retry_time;index"`
RetryDelay int `gorm:"column:retry_delay;default:5000"`
Priority int `gorm:"column:priority;default:5;index"`
QueueName string `gorm:"column:queue_name;type:varchar(100);default:'default'"`
WorkerID *string `gorm:"column:worker_id;type:varchar(50)"`
FailureReason *string `gorm:"column:failure_reason;type:text"`
StackTrace *string `gorm:"column:stack_trace;type:text"`
CreatedBy *string `gorm:"column:created_by;type:varchar(100)"`
}
// TableName returns the table name for AsyncTask model
func (a *AsyncTask) TableName() string {
return "async_task"
}
// SetSubmitted marks the task as submitted
func (a *AsyncTask) SetSubmitted() {
a.Status = AsyncTaskStatusSubmitted
}
// SetRunning marks the task as running
func (a *AsyncTask) SetRunning() {
a.Status = AsyncTaskStatusRunning
}
// SetCompleted marks the task as completed with finished timestamp
func (a *AsyncTask) SetCompleted(timestamp int64) {
a.Status = AsyncTaskStatusCompleted
a.FinishedAt = &timestamp
a.setProgress(100)
}
// SetFailed marks the task as failed with finished timestamp
func (a *AsyncTask) SetFailed(timestamp int64) {
a.Status = AsyncTaskStatusFailed
a.FinishedAt = &timestamp
}
// setProgress updates the task progress (0-100)
func (a *AsyncTask) setProgress(value int) {
if value < 0 {
value = 0
}
if value > 100 {
value = 100
}
a.Progress = &value
}
// UpdateProgress updates the task progress with validation
func (a *AsyncTask) UpdateProgress(value int) {
a.setProgress(value)
}
// IsCompleted checks if the task is completed
func (a *AsyncTask) IsCompleted() bool {
return a.Status == AsyncTaskStatusCompleted
}
// IsRunning checks if the task is running
func (a *AsyncTask) IsRunning() bool {
return a.Status == AsyncTaskStatusRunning
}
// IsFailed checks if the task failed
func (a *AsyncTask) IsFailed() bool {
return a.Status == AsyncTaskStatusFailed
}
// IsValidAsyncTaskType checks if the task type is valid
func IsValidAsyncTaskType(taskType string) bool {
switch AsyncTaskType(taskType) {
case AsyncTaskTypeTopologyAnalysis, AsyncTaskTypePerformanceAnalysis,
AsyncTaskTypeEventAnalysis, AsyncTaskTypeBatchImport, AsyncTaskTypeTest:
return true
default:
return false
}
}

orm/async_task_result.go Normal file (75 lines added)
View File

@ -0,0 +1,75 @@
// Package orm defines database data structures
package orm
import (
"github.com/gofrs/uuid"
)
// AsyncTaskResult stores computation results, separate from AsyncTask model for flexibility
type AsyncTaskResult struct {
TaskID uuid.UUID `gorm:"column:task_id;primaryKey;type:uuid"`
Result JSONMap `gorm:"column:result;type:jsonb"`
ErrorCode *int `gorm:"column:error_code"`
ErrorMessage *string `gorm:"column:error_message;type:text"`
ErrorDetail JSONMap `gorm:"column:error_detail;type:jsonb"`
ExecutionTime int64 `gorm:"column:execution_time;not null;default:0"`
MemoryUsage *int64 `gorm:"column:memory_usage"`
CPUUsage *float64 `gorm:"column:cpu_usage"`
RetryCount int `gorm:"column:retry_count;default:0"`
CompletedAt int64 `gorm:"column:completed_at;not null"`
}
// TableName returns the table name for AsyncTaskResult model
func (a *AsyncTaskResult) TableName() string {
return "async_task_result"
}
// SetSuccess sets the result for successful task execution
func (a *AsyncTaskResult) SetSuccess(result JSONMap) {
a.Result = result
a.ErrorCode = nil
a.ErrorMessage = nil
a.ErrorDetail = nil
}
// SetError sets the error information for failed task execution
func (a *AsyncTaskResult) SetError(code int, message string, detail JSONMap) {
a.Result = nil
a.ErrorCode = &code
a.ErrorMessage = &message
a.ErrorDetail = detail
}
// HasError checks if the task result contains an error
func (a *AsyncTaskResult) HasError() bool {
return a.ErrorCode != nil || a.ErrorMessage != nil
}
// GetErrorCode returns the error code or 0 if no error
func (a *AsyncTaskResult) GetErrorCode() int {
if a.ErrorCode == nil {
return 0
}
return *a.ErrorCode
}
// GetErrorMessage returns the error message or empty string if no error
func (a *AsyncTaskResult) GetErrorMessage() string {
if a.ErrorMessage == nil {
return ""
}
return *a.ErrorMessage
}
// IsSuccess checks if the task execution was successful
func (a *AsyncTaskResult) IsSuccess() bool {
return !a.HasError()
}
// Clear clears all result data
func (a *AsyncTaskResult) Clear() {
a.Result = nil
a.ErrorCode = nil
a.ErrorMessage = nil
a.ErrorDetail = nil
}
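SetSuccess and SetError are mutually exclusive by construction: each clears the other side's fields, so HasError and IsSuccess stay consistent. A sketch with a trimmed copy of the struct (JSONMap simplified to a plain map for illustration):

```go
package main

import "fmt"

// AsyncTaskResult is a trimmed illustration of the model above.
type AsyncTaskResult struct {
	Result       map[string]any
	ErrorCode    *int
	ErrorMessage *string
}

// SetError clears the success result and records the error, as above.
func (a *AsyncTaskResult) SetError(code int, msg string) {
	a.Result = nil
	a.ErrorCode = &code
	a.ErrorMessage = &msg
}

// HasError reports whether any error field is set.
func (a *AsyncTaskResult) HasError() bool {
	return a.ErrorCode != nil || a.ErrorMessage != nil
}

// GetErrorCode returns the error code, or 0 when there is no error.
func (a *AsyncTaskResult) GetErrorCode() int {
	if a.ErrorCode == nil {
		return 0
	}
	return *a.ErrorCode
}

func main() {
	r := &AsyncTaskResult{Result: map[string]any{"reachable": true}}
	fmt.Println(r.HasError(), r.GetErrorCode()) // false 0
	r.SetError(3000, "traversal failed")
	fmt.Println(r.HasError(), r.GetErrorCode()) // true 3000
}
```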

View File

@ -72,7 +72,6 @@ func (b *BusbarSection) TableName() string {
// SetComponentID implements the BasicModelInterface interface
func (b *BusbarSection) SetComponentID(componentID int64) {
b.ComponentID = componentID
return
}
// ReturnTableName implements the BasicModelInterface interface

View File

@ -34,7 +34,7 @@ type Bay struct {
DevEtc JSONMap `gorm:"column:dev_etc;type:jsonb;not null;default:'[]'"`
Components []uuid.UUID `gorm:"column:components;type:uuid[];not null;default:'{}'"`
Op int `gorm:"column:op;not null;default:-1"`
Ts time.Time `gorm:"column:ts;type:timestamptz;not null;default:CURRENT_TIMESTAMP"`
TS time.Time `gorm:"column:ts;type:timestamptz;not null;default:CURRENT_TIMESTAMP"`
}
// TableName returns the table name of Bay

View File

@ -27,7 +27,7 @@ type Component struct {
Label JSONMap `gorm:"column:label;type:jsonb;not null;default:'{}'"`
Context JSONMap `gorm:"column:context;type:jsonb;not null;default:'{}'"`
Op int `gorm:"column:op;not null;default:-1"`
Ts time.Time `gorm:"column:ts;type:timestamptz;not null;default:current_timestamp;autoCreateTime"`
TS time.Time `gorm:"column:ts;type:timestamptz;not null;default:current_timestamp;autoCreateTime"`
}
// TableName returns the table name of Component

View File

@ -12,7 +12,7 @@ type Grid struct {
Name string `gorm:"column:name"`
Description string `gorm:"column:description"`
Op int `gorm:"column:op"`
Ts time.Time `gorm:"column:ts"`
TS time.Time `gorm:"column:ts"`
}
// TableName returns the table name of Grid

View File

@ -20,7 +20,7 @@ type Measurement struct {
BayUUID uuid.UUID `gorm:"column:bay_uuid;type:uuid;not null"`
ComponentUUID uuid.UUID `gorm:"column:component_uuid;type:uuid;not null"`
Op int `gorm:"column:op;not null;default:-1"`
Ts time.Time `gorm:"column:ts;type:timestamptz;not null;default:CURRENT_TIMESTAMP"`
TS time.Time `gorm:"column:ts;type:timestamptz;not null;default:CURRENT_TIMESTAMP"`
}
// TableName returns the table name of Measurement

View File

@ -12,7 +12,7 @@ type Page struct {
Context JSONMap `gorm:"column:context;type:jsonb;default:'{}'"`
Description string `gorm:"column:description"`
Op int `gorm:"column:op"`
Ts time.Time `gorm:"column:ts"`
TS time.Time `gorm:"column:ts"`
}
// TableName returns the table name of Page

View File

@ -14,7 +14,7 @@ type Station struct {
Description string `gorm:"column:description"`
IsLocal bool `gorm:"column:is_local"`
Op int `gorm:"column:op"`
Ts time.Time `gorm:"column:ts"`
TS time.Time `gorm:"column:ts"`
}
// TableName returns the table name of Station

View File

@ -16,7 +16,7 @@ type Topologic struct {
Flag int `gorm:"column:flag"`
Description string `gorm:"column:description;size:512;not null;default:''"`
Op int `gorm:"column:op;not null;default:-1"`
Ts time.Time `gorm:"column:ts;type:timestamptz;not null;default:CURRENT_TIMESTAMP"`
TS time.Time `gorm:"column:ts;type:timestamptz;not null;default:CURRENT_TIMESTAMP"`
}
// TableName returns the table name of Topologic

View File

@ -13,7 +13,7 @@ type Zone struct {
Name string `gorm:"column:name"`
Description string `gorm:"column:description"`
Op int `gorm:"column:op"`
Ts time.Time `gorm:"column:ts"`
TS time.Time `gorm:"column:ts"`
}
// TableName returns the table name of Zone

View File

@ -14,7 +14,7 @@ type Demo struct {
UIAlarm float32 `gorm:"column:ui_alarm" json:"ui_alarm"` // low-current alarm threshold
OIAlarm float32 `gorm:"column:oi_alarm" json:"oi_alarm"` // high-current alarm threshold
Op int `gorm:"column:op" json:"op"` // operator ID
Ts time.Time `gorm:"column:ts" json:"ts"` // operation timestamp
TS time.Time `gorm:"column:ts" json:"ts"` // operation timestamp
}
// TableName returns the table name of Demo
@ -25,7 +25,6 @@ func (d *Demo) TableName() string {
// SetComponentID implements the BasicModelInterface interface
func (d *Demo) SetComponentID(componentID int64) {
d.ComponentID = componentID
return
}
// ReturnTableName implements the BasicModelInterface interface

View File

@ -5,11 +5,11 @@ import (
"fmt"
"time"
"modelRT/alert"
"modelRT/config"
"modelRT/constants"
"modelRT/diagram"
"modelRT/logger"
"modelRT/real-time-data/alert"
"github.com/panjf2000/ants/v2"
)

View File

@ -9,7 +9,8 @@ import (
"modelRT/constants"
"modelRT/logger"
"modelRT/real-time-data/event"
"modelRT/mq"
"modelRT/mq/event"
)
// RealTimeAnalyzer defines the general interface for real-time data analysis and event triggering
@ -26,6 +27,13 @@ type teEventThresholds struct {
isFloatCause bool
}
type teBreachTrigger struct {
breachType string
triggered bool
triggeredValues []float64
eventOpts []event.EventOption
}
// parseTEThresholds parses telemetry thresholds from the cause map
func parseTEThresholds(cause map[string]any) (teEventThresholds, error) {
t := teEventThresholds{}
@ -84,60 +92,74 @@ func (t *TEAnalyzer) AnalyzeAndTriggerEvent(ctx context.Context, conf *ComputeCo
// analyzeTEDataLogic processes telemetry data and triggers events
func analyzeTEDataLogic(ctx context.Context, conf *ComputeConfig, thresholds teEventThresholds, realTimeValues []float64) {
windowSize := conf.minBreachCount
if windowSize <= 0 {
logger.Error(ctx, "variable minBreachCount is invalid or zero, analysis skipped", "minBreachCount", windowSize)
dataLen := len(realTimeValues)
if dataLen < windowSize || windowSize <= 0 {
return
}
// mark whether any events have been triggered in this batch
var eventTriggered bool
breachTriggers := map[string]bool{
"up": false, "upup": false, "down": false, "downdown": false,
statusArray := make([]string, dataLen)
for i, val := range realTimeValues {
statusArray[i] = getTEBreachType(val, thresholds)
}
// implement slide window to determine breach counts
for i := 0; i <= len(realTimeValues)-windowSize; i++ {
window := realTimeValues[i : i+windowSize]
firstValueBreachType := getTEBreachType(window[0], thresholds)
breachTriggers := make(map[string]teBreachTrigger)
for i := 0; i <= dataLen-windowSize; i++ {
firstBreachType := statusArray[i]
if firstValueBreachType == "" {
// if the first value in the window does not breach, skip this window directly
if firstBreachType == "" {
continue
}
allMatch := true
for j := 1; j < windowSize; j++ {
currentValueBreachType := getTEBreachType(window[j], thresholds)
if currentValueBreachType != firstValueBreachType {
if statusArray[i+j] != firstBreachType {
allMatch = false
break
}
}
if allMatch {
triggerValues := realTimeValues[i : i+windowSize]
// in the case of a continuous sequence of out-of-limit events, check whether this type of event has already been triggered in the current batch of data
if !breachTriggers[firstValueBreachType] {
// trigger event
logger.Warn(ctx, "event triggered by sliding window", "breach_type", firstValueBreachType, "value", window[windowSize-1])
_, exists := breachTriggers[firstBreachType]
if !exists {
logger.Warn(ctx, "event triggered by sliding window",
"breach_type", firstBreachType,
"trigger_values", triggerValues)
breachTriggers[firstValueBreachType] = true
eventTriggered = true
// build Options
opts := []event.EventOption{
event.WithConditionValue(triggerValues, conf.Cause),
event.WithTEAnalysisResult(firstBreachType),
event.WithCategory(constants.EventWarnUpDownLimitCategroy),
// TODO: generate operations and decide how to attach them to the event
// event.WithOperations(nil)
}
breachTriggers[firstBreachType] = teBreachTrigger{
breachType: firstBreachType,
triggered: false,
triggeredValues: triggerValues,
eventOpts: opts,
}
}
}
}
if eventTriggered {
command, content := genTEEventCommandAndContent(ctx, conf.Action)
// TODO: decide whether content may be empty; not allowed for now
if command == "" || content == "" {
logger.Error(ctx, "generate telemetry evnet command or content failed", "action", conf.Action, "command", command, "content", content)
for breachType, trigger := range breachTriggers {
// trigger Action
command, mainBody := genTEEventCommandAndMainBody(ctx, conf.Action)
eventName := fmt.Sprintf("telemetry_%s_%s_Breach_Event", mainBody, breachType)
eventRecord, err := event.TriggerEventAction(ctx, command, eventName, trigger.eventOpts...)
if err != nil {
logger.Error(ctx, "trigger event action failed", "error", err)
return
}
event.TriggerEventAction(ctx, command, content)
return
mq.MsgChan <- eventRecord
}
}
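The reworked loop first classifies every sample into a breach type (the statusArray pass), then slides a window of minBreachCount samples and records at most one trigger per breach type. A self-contained sketch of that core algorithm; the threshold comparison is replaced here by a caller-supplied classify function, an illustrative simplification rather than the production signature:

```go
package main

import "fmt"

// detectBreaches slides a window of windowSize samples over values and
// records the triggering window once per breach type, mirroring the
// statusArray + sliding-window structure of analyzeTEDataLogic.
func detectBreaches(values []float64, windowSize int, classify func(float64) string) map[string][]float64 {
	n := len(values)
	if windowSize <= 0 || n < windowSize {
		return nil // nothing to analyze
	}
	// precompute each sample's breach type once (O(n) lookups)
	status := make([]string, n)
	for i, v := range values {
		status[i] = classify(v)
	}
	triggered := make(map[string][]float64)
	for i := 0; i <= n-windowSize; i++ {
		first := status[i]
		if first == "" {
			continue // window start does not breach; skip this window
		}
		allMatch := true
		for j := 1; j < windowSize; j++ {
			if status[i+j] != first {
				allMatch = false
				break
			}
		}
		// trigger each breach type at most once per batch
		if _, seen := triggered[first]; allMatch && !seen {
			triggered[first] = values[i : i+windowSize]
		}
	}
	return triggered
}

func main() {
	classify := func(v float64) string {
		if v > 100 {
			return "up"
		}
		return ""
	}
	fmt.Println(detectBreaches([]float64{90, 120, 130, 140, 95}, 3, classify)) // map[up:[120 130 140]]
}
```

Precomputing the per-sample status avoids re-classifying each value once per overlapping window, which is the main change over the old window-slice version.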
func genTEEventCommandAndContent(ctx context.Context, action map[string]any) (command string, content string) {
func genTEEventCommandAndMainBody(ctx context.Context, action map[string]any) (command string, mainBody string) {
cmdValue, exist := action["command"]
if !exist {
logger.Error(ctx, "can not find command variable into action map", "action", action)
@ -185,7 +207,7 @@ type tiEventThresholds struct {
isFloatCause bool
}
// parseTEThresholds define func to parse telesignal thresholds by casue map
// parseTIThresholds parses telesignal thresholds from the cause map
func parseTIThresholds(cause map[string]any) (tiEventThresholds, error) {
edgeKey := "edge"
t := tiEventThresholds{
@ -211,11 +233,12 @@ func parseTIThresholds(cause map[string]any) (tiEventThresholds, error) {
// getTIBreachType determines which out-of-limit type the telesignal real-time data belongs to
func getTIBreachType(currentValue float64, previousValue float64, t tiEventThresholds) string {
if t.edge == constants.TelesignalRaising {
switch t.edge {
case constants.TelesignalRaising:
if previousValue == 0.0 && currentValue == 1.0 {
return constants.TIBreachTriggerType
}
} else if t.edge == constants.TelesignalFalling {
case constants.TelesignalFalling:
if previousValue == 1.0 && currentValue == 0.0 {
return constants.TIBreachTriggerType
}
@ -297,18 +320,22 @@ func analyzeTIDataLogic(ctx context.Context, conf *ComputeConfig, thresholds tiE
}
if eventTriggered {
command, content := genTIEventCommandAndContent(conf.Action)
// TODO: decide whether content may be empty; not allowed for now
if command == "" || content == "" {
logger.Error(ctx, "generate telemetry evnet command or content failed", "action", conf.Action, "command", command, "content", content)
command, mainBody := genTIEventCommandAndMainBody(conf.Action)
if command == "" || mainBody == "" {
logger.Error(ctx, "generate telesignal event command or main body failed", "action", conf.Action, "command", command, "main_body", mainBody)
return
}
event.TriggerEventAction(ctx, command, content)
eventRecord, err := event.TriggerEventAction(ctx, command, mainBody)
if err != nil {
logger.Error(ctx, "trigger event action failed", "error", err)
return
}
mq.MsgChan <- eventRecord
return
}
}
func genTIEventCommandAndContent(action map[string]any) (command string, content string) {
func genTIEventCommandAndMainBody(action map[string]any) (command string, mainBody string) {
cmdValue, exist := action["command"]
if !exist {
return "", ""

View File

@ -1,74 +0,0 @@
// Package event define real time data evnet operation functions
package event
import (
"context"
"modelRT/logger"
)
type actionHandler func(ctx context.Context, content string) error
// actionDispatchMap define variable to store all action handler into map
var actionDispatchMap = map[string]actionHandler{
"info": handleInfoAction,
"warning": handleWarningAction,
"error": handleErrorAction,
"critical": handleCriticalAction,
"exception": handleExceptionAction,
}
// TriggerEventAction define func to trigger event by action in compute config
func TriggerEventAction(ctx context.Context, command string, content string) {
handler, exists := actionDispatchMap[command]
if !exists {
logger.Error(ctx, "unknown action command", "command", command)
return
}
err := handler(ctx, content)
if err != nil {
logger.Error(ctx, "action handler failed", "command", command, "content", content, "error", err)
return
}
logger.Info(ctx, "action handler success", "command", command, "content", content)
}
func handleInfoAction(ctx context.Context, content string) error {
// actually perform operations such as sending alerts and logging
actionParams := content
// ... logic to send info level event using actionParams ...
logger.Warn(ctx, "trigger info event", "message", actionParams)
return nil
}
func handleWarningAction(ctx context.Context, content string) error {
// actually perform operations such as sending alerts and logging
actionParams := content
// ... logic to send warning level event using actionParams ...
logger.Warn(ctx, "trigger warning event", "message", actionParams)
return nil
}
func handleErrorAction(ctx context.Context, content string) error {
// actually perform operations such as sending alerts and logging
actionParams := content
// ... logic to send error level event using actionParams ...
logger.Warn(ctx, "trigger error event", "message", actionParams)
return nil
}
func handleCriticalAction(ctx context.Context, content string) error {
// actually perform operations such as sending alerts and logging
actionParams := content
// ... logic to send critical level event using actionParams ...
logger.Warn(ctx, "trigger critical event", "message", actionParams)
return nil
}
func handleExceptionAction(ctx context.Context, content string) error {
// actually perform operations such as sending alerts and logging
actionParams := content
// ... logic to send except level event using actionParams ...
logger.Warn(ctx, "trigger except event", "message", actionParams)
return nil
}

View File

@ -1,400 +0,0 @@
// Package realtimedata define real time data operation functions
package realtimedata
import (
"context"
"errors"
"fmt"
"time"
"modelRT/constants"
"modelRT/diagram"
"modelRT/logger"
"modelRT/model"
"modelRT/network"
"modelRT/orm"
"modelRT/util"
)
var (
// RealTimeDataChan define channel of real time data receive
RealTimeDataChan chan network.RealTimeDataReceiveRequest
globalComputeState *MeasComputeState
)
func init() {
RealTimeDataChan = make(chan network.RealTimeDataReceiveRequest, 100)
globalComputeState = NewMeasComputeState()
}
// StartRealTimeDataComputing define func to start real time data process goroutines by measurement info
func StartRealTimeDataComputing(ctx context.Context, measurements []orm.Measurement) {
for _, measurement := range measurements {
enableValue, exist := measurement.EventPlan["enable"]
enable, ok := enableValue.(bool)
if !exist || !enable {
logger.Info(ctx, "measurement object do not need real time data computing", "measurement_uuid", measurement.ComponentUUID)
continue
}
if !ok {
logger.Error(ctx, "covert enable variable to boolean type failed", "measurement_uuid", measurement.ComponentUUID, "enable", enableValue)
continue
}
conf, err := initComputeConfig(measurement)
if err != nil {
logger.Error(ctx, "failed to initialize real time compute config", "measurement_uuid", measurement.ComponentUUID, "error", err)
continue
}
if conf == nil {
logger.Info(ctx, "measurement object is disabled or does not require real time computing", "measurement_uuid", measurement.ComponentUUID)
continue
}
uuidStr := measurement.ComponentUUID.String()
enrichedCtx := context.WithValue(ctx, constants.MeasurementUUIDKey, uuidStr)
conf.StopGchan = make(chan struct{})
globalComputeState.Store(uuidStr, conf)
logger.Info(ctx, "starting real time data computing for measurement", "measurement_uuid", measurement.ComponentUUID)
go continuousComputation(enrichedCtx, conf)
}
}
func initComputeConfig(measurement orm.Measurement) (*ComputeConfig, error) {
var err error
enableValue, exist := measurement.EventPlan["enable"]
enable, ok := enableValue.(bool)
if !exist {
return nil, nil
}
if !ok {
return nil, fmt.Errorf("field enable can not be converted to boolean, found type: %T", enableValue)
}
if !enable {
return nil, nil
}
conf := &ComputeConfig{}
causeValue, exist := measurement.EventPlan["cause"]
if !exist {
return nil, errors.New("missing required field cause")
}
cause, ok := causeValue.(map[string]any)
if !ok {
return nil, fmt.Errorf("field cause can not be converted to map[string]any, found type: %T", causeValue)
}
conf.Cause, err = processCauseMap(cause)
if err != nil {
return nil, fmt.Errorf("parse content of field cause failed:%w", err)
}
actionValue, exist := measurement.EventPlan["action"]
if !exist {
return nil, errors.New("missing required field action")
}
action, ok := actionValue.(map[string]any)
if !ok {
return nil, fmt.Errorf("field action can not be converted to map[string]any, found type: %T", actionValue)
}
conf.Action = action
queryKey, err := model.GenerateMeasureIdentifier(measurement.DataSource)
if err != nil {
return nil, fmt.Errorf("generate redis query key by datasource failed: %w", err)
}
conf.QueryKey = queryKey
conf.DataSize = int64(measurement.Size)
// TODO use constant values for temporary settings
conf.minBreachCount = constants.MinBreachCount
// TODO: improve how duration is constructed later
conf.Duration = 10
isFloatCause := false
if _, exists := conf.Cause["up"]; exists {
isFloatCause = true
} else if _, exists := conf.Cause["down"]; exists {
isFloatCause = true
} else if _, exists := conf.Cause["upup"]; exists {
isFloatCause = true
} else if _, exists := conf.Cause["downdown"]; exists {
isFloatCause = true
}
if isFloatCause {
// te config
teThresholds, err := parseTEThresholds(conf.Cause)
if err != nil {
return nil, fmt.Errorf("failed to parse telemetry thresholds: %w", err)
}
conf.Analyzer = &TEAnalyzer{Thresholds: teThresholds}
} else {
// ti config
tiThresholds, err := parseTIThresholds(conf.Cause)
if err != nil {
return nil, fmt.Errorf("failed to parse telesignal thresholds: %w", err)
}
conf.Analyzer = &TIAnalyzer{Thresholds: tiThresholds}
}
return conf, nil
}
func processCauseMap(data map[string]any) (map[string]any, error) {
causeResult := make(map[string]any)
keysToExtract := []string{"up", "down", "upup", "downdown"}
var foundFloatKey bool
for _, key := range keysToExtract {
if value, exists := data[key]; exists {
foundFloatKey = true
// check value type
if floatVal, ok := value.(float64); ok {
causeResult[key] = floatVal
} else {
return nil, fmt.Errorf("key:%s already exists but type is incorrect.expected float64, actual %T", key, value)
}
}
}
if foundFloatKey == true {
return causeResult, nil
}
edgeKey := "edge"
if value, exists := data[edgeKey]; exists {
if stringVal, ok := value.(string); ok {
switch stringVal {
case "raising":
fallthrough
case "falling":
causeResult[edgeKey] = stringVal
default:
return nil, fmt.Errorf("key:%s value is incorrect,actual value %s", edgeKey, value)
}
} else {
return nil, fmt.Errorf("key:%s already exists but type is incorrect.expected string, actual %T", edgeKey, value)
}
} else {
return nil, fmt.Errorf("key:%s do not exists", edgeKey)
}
return nil, fmt.Errorf("cause map is invalid: missing required keys (%v) or '%s'", keysToExtract, edgeKey)
}
func continuousComputation(ctx context.Context, conf *ComputeConfig) {
client := diagram.NewRedisClient()
uuid, _ := ctx.Value(constants.MeasurementUUIDKey).(string)
duration := util.SecondsToDuration(conf.Duration)
ticker := time.NewTicker(duration)
defer ticker.Stop()
for {
select {
case <-conf.StopGchan:
logger.Info(ctx, "continuous computing groutine stopped by local StopGchan", "uuid", uuid)
return
case <-ctx.Done():
logger.Info(ctx, "continuous computing goroutine stopped by parent context done signal")
return
case <-ticker.C:
members, err := client.QueryByZRangeByLex(ctx, conf.QueryKey, conf.DataSize)
if err != nil {
logger.Error(ctx, "query real time data from redis failed", "key", conf.QueryKey, "error", err)
continue
}
realTimedatas := util.ConvertZSetMembersToFloat64(members)
if conf.Analyzer != nil {
conf.Analyzer.AnalyzeAndTriggerEvent(ctx, conf, realTimedatas)
} else {
logger.Error(ctx, "analyzer is not initialized for this measurement", "uuid", uuid)
}
}
}
}
// // ReceiveChan define func to real time data receive and process
// func ReceiveChan(ctx context.Context, consumerConfig *kafka.ConfigMap, topics []string, duration float32) {
// consumer, err := kafka.NewConsumer(consumerConfig)
// if err != nil {
// logger.Error(ctx, "create kafka consumer failed", "error", err)
// return
// }
// defer consumer.Close()
// err = consumer.SubscribeTopics(topics, nil)
// if err != nil {
// logger.Error(ctx, "subscribe kafka topics failed", "topic", topics, "error", err)
// return
// }
// batchSize := 100
// batchTimeout := util.SecondsToDuration(duration)
// messages := make([]*kafka.Message, 0, batchSize)
// lastCommit := time.Now()
// logger.Info(ctx, "start consuming from kafka", "topic", topics)
// for {
// select {
// case <-ctx.Done():
// logger.Info(ctx, "stop real time data computing by context cancel")
// return
// case realTimeData := <-RealTimeDataChan:
// componentUUID := realTimeData.PayLoad.ComponentUUID
// component, err := diagram.GetComponentMap(componentUUID)
// if err != nil {
// logger.Error(ctx, "query component info from diagram map by componet id failed", "component_uuid", componentUUID, "error", err)
// continue
// }
// componentType := component.Type
// if componentType != constants.DemoType {
// logger.Error(ctx, "can not process real time data of component type not equal DemoType", "component_uuid", componentUUID)
// continue
// }
// var anchorName string
// var compareValUpperLimit, compareValLowerLimit float64
// var anchorRealTimeData []float64
// var calculateFunc func(anchorValue float64, args ...float64) float64
// // calculateFunc, params := config.SelectAnchorCalculateFuncAndParams(componentType, anchorName, componentData)
// for _, param := range realTimeData.PayLoad.Values {
// anchorRealTimeData = append(anchorRealTimeData, param.Value)
// }
// anchorConfig := config.AnchorParamConfig{
// AnchorParamBaseConfig: config.AnchorParamBaseConfig{
// ComponentUUID: componentUUID,
// AnchorName: anchorName,
// CompareValUpperLimit: compareValUpperLimit,
// CompareValLowerLimit: compareValLowerLimit,
// AnchorRealTimeData: anchorRealTimeData,
// },
// CalculateFunc: calculateFunc,
// CalculateParams: []float64{},
// }
// anchorChan, err := pool.GetAnchorParamChan(ctx, componentUUID)
// if err != nil {
// logger.Error(ctx, "get anchor param chan failed", "component_uuid", componentUUID, "error", err)
// continue
// }
// anchorChan <- anchorConfig
// default:
// msg, err := consumer.ReadMessage(batchTimeout)
// if err != nil {
// if err.(kafka.Error).Code() == kafka.ErrTimedOut {
// // process accumulated messages when timeout
// if len(messages) > 0 {
// processMessageBatch(ctx, messages)
// consumer.Commit()
// messages = messages[:0]
// }
// continue
// }
// logger.Error(ctx, "read message from kafka failed", "error", err, "msg", msg)
// continue
// }
// messages = append(messages, msg)
// // process messages when batch size or timeout period is reached
// if len(messages) >= batchSize || time.Since(lastCommit) >= batchTimeout {
// processMessageBatch(ctx, messages)
// consumer.Commit()
// messages = messages[:0]
// lastCommit = time.Now()
// }
// }
// }
// }
// type realTimeDataPayload struct {
// ComponentUUID string
// Values []float64
// }
// type realTimeData struct {
// Payload realTimeDataPayload
// }
// func parseKafkaMessage(msgValue []byte) (*realTimeData, error) {
// var realTimeData realTimeData
// err := json.Unmarshal(msgValue, &realTimeData)
// if err != nil {
// return nil, fmt.Errorf("unmarshal real time data failed: %w", err)
// }
// return &realTimeData, nil
// }
// func processRealTimeData(ctx context.Context, realTimeData *realTimeData) {
// componentUUID := realTimeData.Payload.ComponentUUID
// component, err := diagram.GetComponentMap(componentUUID)
// if err != nil {
// logger.Error(ctx, "query component info from diagram map by component id failed",
// "component_uuid", componentUUID, "error", err)
// return
// }
// componentType := component.Type
// if componentType != constants.DemoType {
// logger.Error(ctx, "can not process real time data of component type not equal DemoType",
// "component_uuid", componentUUID)
// return
// }
// var anchorName string
// var compareValUpperLimit, compareValLowerLimit float64
// var anchorRealTimeData []float64
// var calculateFunc func(archorValue float64, args ...float64) float64
// for _, param := range realTimeData.Payload.Values {
// anchorRealTimeData = append(anchorRealTimeData, param)
// }
// anchorConfig := config.AnchorParamConfig{
// AnchorParamBaseConfig: config.AnchorParamBaseConfig{
// ComponentUUID: componentUUID,
// AnchorName: anchorName,
// CompareValUpperLimit: compareValUpperLimit,
// CompareValLowerLimit: compareValLowerLimit,
// AnchorRealTimeData: anchorRealTimeData,
// },
// CalculateFunc: calculateFunc,
// CalculateParams: []float64{},
// }
// anchorChan, err := pool.GetAnchorParamChan(ctx, componentUUID)
// if err != nil {
// logger.Error(ctx, "get anchor param chan failed",
// "component_uuid", componentUUID, "error", err)
// return
// }
// select {
// case anchorChan <- anchorConfig:
// case <-ctx.Done():
// logger.Info(ctx, "context done while sending to anchor chan")
// case <-time.After(5 * time.Second):
// logger.Error(ctx, "timeout sending to anchor chan", "component_uuid", componentUUID)
// }
// }
// // processMessageBatch defines a func to batch process kafka messages
// func processMessageBatch(ctx context.Context, messages []*kafka.Message) {
// for _, msg := range messages {
// realTimeData, err := parseKafkaMessage(msg.Value)
// if err != nil {
// logger.Error(ctx, "parse kafka message failed", "error", err, "msg", msg)
// continue
// }
// go processRealTimeData(ctx, realTimeData)
// }
// }


@ -0,0 +1,229 @@
// Package realtimedata define real time data operation functions
package realtimedata
import (
"context"
"errors"
"fmt"
"time"
"modelRT/constants"
"modelRT/diagram"
"modelRT/logger"
"modelRT/model"
"modelRT/network"
"modelRT/orm"
"modelRT/util"
)
var (
// RealTimeDataChan defines the channel for receiving real time data
RealTimeDataChan chan network.RealTimeDataReceiveRequest
globalComputeState *MeasComputeState
)
func init() {
RealTimeDataChan = make(chan network.RealTimeDataReceiveRequest, 100)
globalComputeState = NewMeasComputeState()
}
// StartComputingRealTimeDataLimit defines a func that starts a goroutine per measurement to compute real time data upper/lower limits
func StartComputingRealTimeDataLimit(ctx context.Context, measurements []orm.Measurement) {
for _, measurement := range measurements {
enableValue, exist := measurement.EventPlan["enable"]
if !exist {
logger.Info(ctx, "measurement object does not need real time data computing", "measurement_uuid", measurement.ComponentUUID)
continue
}
enable, ok := enableValue.(bool)
if !ok {
logger.Error(ctx, "convert enable variable to boolean type failed", "measurement_uuid", measurement.ComponentUUID, "enable", enableValue)
continue
}
if !enable {
logger.Info(ctx, "measurement object does not need real time data computing", "measurement_uuid", measurement.ComponentUUID)
continue
}
conf, err := initComputeConfig(measurement)
if err != nil {
logger.Error(ctx, "failed to initialize real time compute config", "measurement_uuid", measurement.ComponentUUID, "error", err)
continue
}
if conf == nil {
logger.Info(ctx, "measurement object is disabled or does not require real time computing", "measurement_uuid", measurement.ComponentUUID)
continue
}
uuidStr := measurement.ComponentUUID.String()
enrichedCtx := context.WithValue(ctx, constants.MeasurementUUIDKey, uuidStr)
conf.StopGchan = make(chan struct{})
globalComputeState.Store(uuidStr, conf)
logger.Info(ctx, "starting computing real time data limit for measurement", "measurement_uuid", measurement.ComponentUUID)
go continuousComputation(enrichedCtx, conf)
}
}
func initComputeConfig(measurement orm.Measurement) (*ComputeConfig, error) {
var err error
enableValue, exist := measurement.EventPlan["enable"]
enable, ok := enableValue.(bool)
if !exist {
return nil, nil
}
if !ok {
return nil, fmt.Errorf("field enable can not be converted to boolean, found type: %T", enableValue)
}
if !enable {
return nil, nil
}
conf := &ComputeConfig{}
causeValue, exist := measurement.EventPlan["cause"]
if !exist {
return nil, errors.New("missing required field cause")
}
cause, ok := causeValue.(map[string]any)
if !ok {
return nil, fmt.Errorf("field cause can not be converted to map[string]any, found type: %T", causeValue)
}
conf.Cause, err = processCauseMap(cause)
if err != nil {
return nil, fmt.Errorf("parse content of field cause failed:%w", err)
}
actionValue, exist := measurement.EventPlan["action"]
if !exist {
return nil, errors.New("missing required field action")
}
action, ok := actionValue.(map[string]any)
if !ok {
return nil, fmt.Errorf("field action can not be converted to map[string]any, found type: %T", actionValue)
}
conf.Action = action
queryKey, err := model.GenerateMeasureIdentifier(measurement.DataSource)
if err != nil {
return nil, fmt.Errorf("generate redis query key by datasource failed: %w", err)
}
conf.QueryKey = queryKey
conf.DataSize = int64(measurement.Size)
// TODO use constant values for temporary settings
conf.minBreachCount = constants.MinBreachCount
// TODO optimize how duration is created later
conf.Duration = 10
isFloatCause := false
for _, key := range []string{"up", "down", "upup", "downdown"} {
if _, exists := conf.Cause[key]; exists {
isFloatCause = true
break
}
}
if isFloatCause {
// telemetry (TE) config: numeric thresholds
teThresholds, err := parseTEThresholds(conf.Cause)
if err != nil {
return nil, fmt.Errorf("failed to parse telemetry thresholds: %w", err)
}
conf.Analyzer = &TEAnalyzer{Thresholds: teThresholds}
} else {
// telesignal (TI) config: edge transitions
tiThresholds, err := parseTIThresholds(conf.Cause)
if err != nil {
return nil, fmt.Errorf("failed to parse telesignal thresholds: %w", err)
}
conf.Analyzer = &TIAnalyzer{Thresholds: tiThresholds}
}
return conf, nil
}
func processCauseMap(data map[string]any) (map[string]any, error) {
causeResult := make(map[string]any)
keysToExtract := []string{"up", "down", "upup", "downdown"}
var foundFloatKey bool
for _, key := range keysToExtract {
if value, exists := data[key]; exists {
foundFloatKey = true
// check value type
if floatVal, ok := value.(float64); ok {
causeResult[key] = floatVal
} else {
return nil, fmt.Errorf("key:%s exists but type is incorrect, expected float64, actual %T", key, value)
}
}
}
if foundFloatKey {
return causeResult, nil
}
edgeKey := "edge"
value, exists := data[edgeKey]
if !exists {
return nil, fmt.Errorf("cause map is invalid: missing required keys (%v) or '%s'", keysToExtract, edgeKey)
}
stringVal, ok := value.(string)
if !ok {
return nil, fmt.Errorf("key:%s exists but type is incorrect, expected string, actual %T", edgeKey, value)
}
switch stringVal {
case "raising", "falling":
causeResult[edgeKey] = stringVal
default:
return nil, fmt.Errorf("key:%s value is incorrect, actual value %s", edgeKey, stringVal)
}
return causeResult, nil
}
func continuousComputation(ctx context.Context, conf *ComputeConfig) {
client := diagram.NewRedisClient()
uuid, _ := ctx.Value(constants.MeasurementUUIDKey).(string)
duration := util.SecondsToDuration(conf.Duration)
ticker := time.NewTicker(duration)
defer ticker.Stop()
for {
select {
case <-conf.StopGchan:
logger.Info(ctx, "continuous computing goroutine stopped by local StopGchan", "uuid", uuid)
return
case <-ctx.Done():
logger.Info(ctx, "continuous computing goroutine stopped by parent context done signal")
return
case <-ticker.C:
queryCtx, cancel := context.WithTimeout(ctx, 2*time.Second)
members, err := client.QueryByZRange(queryCtx, conf.QueryKey, conf.DataSize)
cancel()
if err != nil {
logger.Error(ctx, "query real time data from redis failed", "key", conf.QueryKey, "error", err)
continue
}
realTimedatas := util.ConvertZSetMembersToFloat64(members)
if len(realTimedatas) == 0 {
logger.Info(ctx, "no real time data queried from redis, skip this computation cycle", "key", conf.QueryKey)
continue
}
if conf.Analyzer != nil {
conf.Analyzer.AnalyzeAndTriggerEvent(ctx, conf, realTimedatas)
} else {
logger.Error(ctx, "analyzer is not initialized for this measurement", "uuid", uuid)
}
}
}
}

router/async_task.go Normal file

@ -0,0 +1,32 @@
// Package router provides router config
package router
import (
"modelRT/handler"
"github.com/gin-gonic/gin"
)
// registerAsyncTaskRoutes defines a func to register async task routes
func registerAsyncTaskRoutes(rg *gin.RouterGroup, middlewares ...gin.HandlerFunc) {
g := rg.Group("/task/")
g.Use(middlewares...)
// Async task creation
g.POST("async", handler.AsyncTaskCreateHandler)
// Async task result query
g.GET("async/results", handler.AsyncTaskResultQueryHandler)
// Async task detail query
g.GET("async/:task_id", handler.AsyncTaskResultDetailHandler)
// Async task cancellation
g.POST("async/:task_id/cancel", handler.AsyncTaskCancelHandler)
// Internal APIs for worker updates (not exposed to external users)
internal := g.Group("internal/")
internal.Use(middlewares...)
internal.POST("async/progress", handler.AsyncTaskProgressUpdateHandler)
internal.POST("async/status", handler.AsyncTaskStatusUpdateHandler)
}


@ -27,4 +27,5 @@ func RegisterRoutes(engine *gin.Engine, clientToken string) {
registerDataRoutes(routeGroup)
registerMonitorRoutes(routeGroup)
registerComponentRoutes(routeGroup, middleware.SetTokenMiddleware(clientToken))
registerAsyncTaskRoutes(routeGroup, middleware.SetTokenMiddleware(clientToken))
}


@ -1,97 +0,0 @@
package sharememory
import (
"fmt"
"unsafe"
"modelRT/orm"
"golang.org/x/sys/unix"
)
// CreateShareMemory defines a function to create a shared memory
func CreateShareMemory(key uintptr, structSize uintptr) (uintptr, error) {
// logger := logger.GetLoggerInstance()
// create shared memory
shmID, _, err := unix.Syscall(unix.SYS_SHMGET, key, structSize, unix.IPC_CREAT|0o666)
if err != 0 {
// logger.Error(fmt.Sprintf("create shared memory by key %v failed:", key), zap.Error(err))
return 0, fmt.Errorf("create shared memory failed:%w", err)
}
// attach shared memory
shmAddr, _, err := unix.Syscall(unix.SYS_SHMAT, shmID, 0, 0)
if err != 0 {
// logger.Error(fmt.Sprintf("attach shared memory by shmID %v failed:", shmID), zap.Error(err))
return 0, fmt.Errorf("attach shared memory failed:%w", err)
}
return shmAddr, nil
}
// ReadComponentFromShareMemory defines a function to read component value from shared memory
func ReadComponentFromShareMemory(key uintptr, componentInfo *orm.Component) error {
structSize := unsafe.Sizeof(orm.Component{})
shmID, _, err := unix.Syscall(unix.SYS_SHMGET, key, uintptr(int(structSize)), 0o666)
if err != 0 {
return fmt.Errorf("get shared memory failed:%w", err)
}
shmAddr, _, err := unix.Syscall(unix.SYS_SHMAT, shmID, 0, 0)
if err != 0 {
return fmt.Errorf("attach shared memory failed:%w", err)
}
// read component data from shared memory
*componentInfo = *(*orm.Component)(unsafe.Pointer(shmAddr))
// Detach shared memory
unix.Syscall(unix.SYS_SHMDT, shmAddr, 0, 0)
return nil
}
func WriteComponentInShareMemory(key uintptr, componentInfo *orm.Component) error {
structSize := unsafe.Sizeof(orm.Component{})
shmID, _, err := unix.Syscall(unix.SYS_SHMGET, key, uintptr(int(structSize)), 0o666)
if err != 0 {
return fmt.Errorf("get shared memory failed:%w", err)
}
shmAddr, _, err := unix.Syscall(unix.SYS_SHMAT, shmID, 0, 0)
if err != 0 {
return fmt.Errorf("attach shared memory failed:%w", err)
}
obj := (*orm.Component)(unsafe.Pointer(shmAddr))
fmt.Println(obj)
// id integer NOT NULL DEFAULT nextval('component_id_seq'::regclass),
// global_uuid uuid NOT NULL DEFAULT gen_random_uuid(),
// nspath character varying(32) COLLATE pg_catalog."default",
// tag character varying(32) COLLATE pg_catalog."default" NOT NULL,
// name character varying(64) COLLATE pg_catalog."default" NOT NULL,
// description character varying(512) COLLATE pg_catalog."default" NOT NULL DEFAULT ''::character varying,
// grid character varying(64) COLLATE pg_catalog."default" NOT NULL,
// zone character varying(64) COLLATE pg_catalog."default" NOT NULL,
// station character varying(64) COLLATE pg_catalog."default" NOT NULL,
// type integer NOT NULL,
// in_service boolean DEFAULT false,
// state integer NOT NULL DEFAULT 0,
// connected_bus jsonb NOT NULL DEFAULT '{}'::jsonb,
// label jsonb NOT NULL DEFAULT '{}'::jsonb,
// context jsonb NOT NULL DEFAULT '{}'::jsonb,
// page_id integer NOT NULL,
// op integer NOT NULL DEFAULT '-1'::integer,
// ts timestamp with time zone NOT NULL DEFAULT CURRENT_TIMESTAMP,
unix.Syscall(unix.SYS_SHMDT, shmAddr, 0, 0)
return nil
}
// DeleteShareMemory defines a function to delete shared memory
func DeleteShareMemory(key uintptr) error {
_, _, err := unix.Syscall(unix.SYS_SHM_UNLINK, key, 0, 0o666)
if err != 0 {
return fmt.Errorf("delete shared memory failed:%w", err)
}
return nil
}

sql/async_task.sql Normal file

@ -0,0 +1,57 @@
-- Async task table schema migration
-- Add new columns for enhanced task tracking and retry functionality
-- Add new columns to async_task table
ALTER TABLE async_task ADD COLUMN IF NOT EXISTS started_at bigint NULL;
ALTER TABLE async_task ADD COLUMN IF NOT EXISTS execution_time bigint NULL;
ALTER TABLE async_task ADD COLUMN IF NOT EXISTS retry_count integer NOT NULL DEFAULT 0;
ALTER TABLE async_task ADD COLUMN IF NOT EXISTS max_retry_count integer NOT NULL DEFAULT 3;
ALTER TABLE async_task ADD COLUMN IF NOT EXISTS next_retry_time bigint NULL;
ALTER TABLE async_task ADD COLUMN IF NOT EXISTS retry_delay integer NOT NULL DEFAULT 5000;
ALTER TABLE async_task ADD COLUMN IF NOT EXISTS priority integer NOT NULL DEFAULT 5;
ALTER TABLE async_task ADD COLUMN IF NOT EXISTS queue_name varchar(100) NOT NULL DEFAULT 'default';
ALTER TABLE async_task ADD COLUMN IF NOT EXISTS worker_id varchar(50) NULL;
ALTER TABLE async_task ADD COLUMN IF NOT EXISTS failure_reason text NULL;
ALTER TABLE async_task ADD COLUMN IF NOT EXISTS stack_trace text NULL;
ALTER TABLE async_task ADD COLUMN IF NOT EXISTS created_by varchar(100) NULL;
-- Add new columns to async_task_result table
ALTER TABLE async_task_result ADD COLUMN IF NOT EXISTS execution_time bigint NOT NULL DEFAULT 0;
ALTER TABLE async_task_result ADD COLUMN IF NOT EXISTS memory_usage bigint NULL;
ALTER TABLE async_task_result ADD COLUMN IF NOT EXISTS cpu_usage double precision NULL;
ALTER TABLE async_task_result ADD COLUMN IF NOT EXISTS retry_count integer NOT NULL DEFAULT 0;
ALTER TABLE async_task_result ADD COLUMN IF NOT EXISTS completed_at bigint NOT NULL DEFAULT 0;
-- Add indexes for improved query performance
CREATE INDEX IF NOT EXISTS idx_async_task_status_priority ON async_task(status, priority DESC);
CREATE INDEX IF NOT EXISTS idx_async_task_next_retry_time ON async_task(next_retry_time) WHERE status = 'FAILED';
CREATE INDEX IF NOT EXISTS idx_async_task_created_by ON async_task(created_by);
CREATE INDEX IF NOT EXISTS idx_async_task_task_type ON async_task(task_type);
CREATE INDEX IF NOT EXISTS idx_async_task_started_at ON async_task(started_at) WHERE started_at IS NOT NULL;
-- Update existing rows to have default values for new columns
UPDATE async_task SET priority = 5 WHERE priority IS NULL;
UPDATE async_task SET queue_name = 'default' WHERE queue_name IS NULL;
UPDATE async_task SET retry_count = 0 WHERE retry_count IS NULL;
UPDATE async_task SET max_retry_count = 3 WHERE max_retry_count IS NULL;
UPDATE async_task SET retry_delay = 5000 WHERE retry_delay IS NULL;
-- Add comments for new columns
COMMENT ON COLUMN async_task.started_at IS 'Timestamp when task execution started (Unix epoch seconds)';
COMMENT ON COLUMN async_task.execution_time IS 'Task execution time in milliseconds';
COMMENT ON COLUMN async_task.retry_count IS 'Number of retry attempts for failed tasks';
COMMENT ON COLUMN async_task.max_retry_count IS 'Maximum number of retry attempts allowed';
COMMENT ON COLUMN async_task.next_retry_time IS 'Next retry timestamp (Unix epoch seconds)';
COMMENT ON COLUMN async_task.retry_delay IS 'Delay between retries in milliseconds';
COMMENT ON COLUMN async_task.priority IS 'Task priority (1-10, higher is more important)';
COMMENT ON COLUMN async_task.queue_name IS 'Name of the queue the task belongs to';
COMMENT ON COLUMN async_task.worker_id IS 'ID of the worker processing the task';
COMMENT ON COLUMN async_task.failure_reason IS 'Reason for task failure';
COMMENT ON COLUMN async_task.stack_trace IS 'Stack trace for debugging failed tasks';
COMMENT ON COLUMN async_task.created_by IS 'User or system that created the task';
COMMENT ON COLUMN async_task_result.execution_time IS 'Total execution time in milliseconds';
COMMENT ON COLUMN async_task_result.memory_usage IS 'Memory usage in bytes';
COMMENT ON COLUMN async_task_result.cpu_usage IS 'CPU usage percentage';
COMMENT ON COLUMN async_task_result.retry_count IS 'Number of retries before success';
COMMENT ON COLUMN async_task_result.completed_at IS 'Timestamp when task completed (Unix epoch seconds)';
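The composite index `idx_async_task_status_priority` exists to serve a worker's dequeue query. A sketch of what such a query might look like, assuming a `'PENDING'` status literal and a claim-by-lock pattern that the migration itself does not define:

```sql
-- Hypothetical worker dequeue: pick the highest-priority pending task,
-- served by idx_async_task_status_priority, and lock the row so
-- concurrent workers skip it instead of blocking.
SELECT id
FROM async_task
WHERE status = 'PENDING'
ORDER BY priority DESC
LIMIT 1
FOR UPDATE SKIP LOCKED;
```

The partial index on `next_retry_time` plays the analogous role for the retry sweep, matching only rows where `status = 'FAILED'`.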

task/handler_factory.go Normal file

@ -0,0 +1,430 @@
// Package task provides asynchronous task processing with handler factory pattern
package task
import (
"context"
"fmt"
"sync"
"time"
"modelRT/database"
"modelRT/logger"
"modelRT/orm"
"github.com/gofrs/uuid"
"gorm.io/gorm"
)
// TaskHandler defines the interface for task processors
type TaskHandler interface {
// Execute processes a task with the given ID, type, and params from the MQ message
Execute(ctx context.Context, taskID uuid.UUID, taskType TaskType, params map[string]any, db *gorm.DB) error
// CanHandle returns true if this handler can process the given task type
CanHandle(taskType TaskType) bool
// Name returns the name of the handler for logging and metrics
Name() string
}
// HandlerFactory creates task handlers based on task type
type HandlerFactory struct {
handlers map[TaskType]TaskHandler
mu sync.RWMutex
}
// NewHandlerFactory creates a new HandlerFactory
func NewHandlerFactory() *HandlerFactory {
return &HandlerFactory{
handlers: make(map[TaskType]TaskHandler),
}
}
// RegisterHandler registers a handler for a specific task type
func (f *HandlerFactory) RegisterHandler(ctx context.Context, taskType TaskType, handler TaskHandler) {
f.mu.Lock()
defer f.mu.Unlock()
f.handlers[taskType] = handler
logger.Info(ctx, "Handler registered",
"task_type", taskType,
"handler_name", handler.Name(),
)
}
// GetHandler returns a handler for the given task type
func (f *HandlerFactory) GetHandler(taskType TaskType) (TaskHandler, error) {
f.mu.RLock()
handler, exists := f.handlers[taskType]
f.mu.RUnlock()
if !exists {
return nil, fmt.Errorf("no handler registered for task type: %s", taskType)
}
return handler, nil
}
// CreateDefaultHandlers registers all default task handlers
func (f *HandlerFactory) CreateDefaultHandlers(ctx context.Context) {
f.RegisterHandler(ctx, TypeTopologyAnalysis, NewTopologyAnalysisHandler())
f.RegisterHandler(ctx, TypeEventAnalysis, NewEventAnalysisHandler())
f.RegisterHandler(ctx, TypeBatchImport, NewBatchImportHandler())
f.RegisterHandler(ctx, TaskType(TaskTypeTest), NewTestTaskHandler())
}
// BaseHandler provides common functionality for all task handlers
type BaseHandler struct {
name string
}
// NewBaseHandler creates a new BaseHandler
func NewBaseHandler(name string) *BaseHandler {
return &BaseHandler{name: name}
}
// Name returns the handler name
func (h *BaseHandler) Name() string {
return h.name
}
// TopologyAnalysisHandler handles topology analysis tasks
type TopologyAnalysisHandler struct {
BaseHandler
}
// NewTopologyAnalysisHandler creates a new TopologyAnalysisHandler
func NewTopologyAnalysisHandler() *TopologyAnalysisHandler {
return &TopologyAnalysisHandler{
BaseHandler: *NewBaseHandler("topology_analysis_handler"),
}
}
// Execute processes a topology analysis task.
// Params (all sourced from the MQ message, no DB lookup needed):
// - start_component_uuid (string, required): BFS origin
// - end_component_uuid (string, required): reachability target
// - check_in_service (bool, optional, default true): skip out-of-service components
func (h *TopologyAnalysisHandler) Execute(ctx context.Context, taskID uuid.UUID, taskType TaskType, params map[string]any, db *gorm.DB) error {
logger.Info(ctx, "topology analysis started", "task_id", taskID)
// Phase 1: parse params from MQ message
startComponentUUID, endComponentUUID, checkInService, err := parseTopologyAnalysisParams(params)
if err != nil {
return fmt.Errorf("invalid topology analysis params: %w", err)
}
logger.Info(ctx, "topology params parsed",
"task_id", taskID,
"start", startComponentUUID,
"end", endComponentUUID,
"check_in_service", checkInService,
)
if err := database.UpdateAsyncTaskProgress(ctx, db, taskID, 20); err != nil {
logger.Warn(ctx, "update progress failed", "task_id", taskID, "progress", 20, "error", err)
}
// Phase 2: query topology edges from startComponentUUID, build adjacency list
topoEdges, err := database.QueryTopologicByStartUUID(ctx, db, startComponentUUID)
if err != nil {
return fmt.Errorf("query topology from start node: %w", err)
}
// adjacency list: uuid_from → []uuid_to
adjMap := make(map[uuid.UUID][]uuid.UUID, len(topoEdges))
// collect all UUIDs for batch InService query
allUUIDs := make(map[uuid.UUID]struct{}, len(topoEdges)*2)
allUUIDs[startComponentUUID] = struct{}{}
for _, edge := range topoEdges {
adjMap[edge.UUIDFrom] = append(adjMap[edge.UUIDFrom], edge.UUIDTo)
allUUIDs[edge.UUIDFrom] = struct{}{}
allUUIDs[edge.UUIDTo] = struct{}{}
}
if err := database.UpdateAsyncTaskProgress(ctx, db, taskID, 40); err != nil {
logger.Warn(ctx, "update progress failed", "task_id", taskID, "progress", 40, "error", err)
}
// Phase 3: batch-load InService status (only when checkInService is true)
inServiceMap := make(map[uuid.UUID]bool)
if checkInService {
uuidSlice := make([]uuid.UUID, 0, len(allUUIDs))
for id := range allUUIDs {
uuidSlice = append(uuidSlice, id)
}
inServiceMap, err = database.QueryComponentsInServiceByUUIDs(ctx, db, uuidSlice)
if err != nil {
return fmt.Errorf("query component in_service status: %w", err)
}
// check the start node itself before BFS
if !inServiceMap[startComponentUUID] {
return persistTopologyResult(ctx, db, taskID, startComponentUUID, endComponentUUID,
checkInService, false, nil, &startComponentUUID)
}
}
if err := database.UpdateAsyncTaskProgress(ctx, db, taskID, 60); err != nil {
logger.Warn(ctx, "update progress failed", "task_id", taskID, "progress", 60, "error", err)
}
// Phase 4: BFS reachability check
visited := make(map[uuid.UUID]struct{})
parent := make(map[uuid.UUID]uuid.UUID) // for path reconstruction
queue := []uuid.UUID{startComponentUUID}
visited[startComponentUUID] = struct{}{}
isReachable := false
var blockedBy *uuid.UUID
for len(queue) > 0 {
cur := queue[0]
queue = queue[1:]
if cur == endComponentUUID {
isReachable = true
break
}
for _, next := range adjMap[cur] {
if _, seen := visited[next]; seen {
continue
}
if checkInService && !inServiceMap[next] {
// record first out-of-service blocker but keep searching other branches
if blockedBy == nil {
id := next
blockedBy = &id
}
continue
}
visited[next] = struct{}{}
parent[next] = cur
queue = append(queue, next)
}
}
if err := database.UpdateAsyncTaskProgress(ctx, db, taskID, 80); err != nil {
logger.Warn(ctx, "update progress failed", "task_id", taskID, "progress", 80, "error", err)
}
// Phase 5: reconstruct path (if reachable) and persist result
var path []uuid.UUID
if isReachable {
blockedBy = nil // reachable path found — clear any partial blocker
path = reconstructPath(parent, startComponentUUID, endComponentUUID)
}
return persistTopologyResult(ctx, db, taskID, startComponentUUID, endComponentUUID,
checkInService, isReachable, path, blockedBy)
}
// parseTopologyAnalysisParams extracts and validates the three required fields.
// check_in_service defaults to true when absent.
func parseTopologyAnalysisParams(params map[string]any) (startID, endID uuid.UUID, checkInService bool, err error) {
startStr, ok := params["start_component_uuid"].(string)
if !ok || startStr == "" {
err = fmt.Errorf("missing or invalid start_component_uuid")
return
}
endStr, ok := params["end_component_uuid"].(string)
if !ok || endStr == "" {
err = fmt.Errorf("missing or invalid end_component_uuid")
return
}
startID, err = uuid.FromString(startStr)
if err != nil {
err = fmt.Errorf("parse start_component_uuid %q: %w", startStr, err)
return
}
endID, err = uuid.FromString(endStr)
if err != nil {
err = fmt.Errorf("parse end_component_uuid %q: %w", endStr, err)
return
}
// check_in_service defaults to true
checkInService = true
if v, exists := params["check_in_service"]; exists {
if b, isBool := v.(bool); isBool {
checkInService = b
}
}
return
}
// reconstructPath walks the parent map backwards from end to start.
func reconstructPath(parent map[uuid.UUID]uuid.UUID, start, end uuid.UUID) []uuid.UUID {
var path []uuid.UUID
for cur := end; cur != start; cur = parent[cur] {
path = append(path, cur)
}
path = append(path, start)
// reverse: path was built end→start
for i, j := 0, len(path)-1; i < j; i, j = i+1, j-1 {
path[i], path[j] = path[j], path[i]
}
return path
}
// persistTopologyResult serialises the analysis outcome and writes it to async_task_result.
func persistTopologyResult(
ctx context.Context, db *gorm.DB, taskID uuid.UUID,
startID, endID uuid.UUID, checkInService, isReachable bool,
path []uuid.UUID, blockedBy *uuid.UUID,
) error {
pathStrs := make([]string, 0, len(path))
for _, id := range path {
pathStrs = append(pathStrs, id.String())
}
result := orm.JSONMap{
"start_component_uuid": startID.String(),
"end_component_uuid": endID.String(),
"check_in_service": checkInService,
"is_reachable": isReachable,
"path": pathStrs,
"computed_at": time.Now().Unix(),
}
if blockedBy != nil {
result["blocked_by"] = blockedBy.String()
}
if err := database.CreateAsyncTaskResult(ctx, db, taskID, result); err != nil {
return fmt.Errorf("save task result: %w", err)
}
logger.Info(ctx, "topology analysis completed",
"task_id", taskID,
"is_reachable", isReachable,
"path_length", len(path),
)
return nil
}
// CanHandle returns true for topology analysis tasks
func (h *TopologyAnalysisHandler) CanHandle(taskType TaskType) bool {
return taskType == TypeTopologyAnalysis
}
// EventAnalysisHandler handles event analysis tasks
type EventAnalysisHandler struct {
BaseHandler
}
// NewEventAnalysisHandler creates a new EventAnalysisHandler
func NewEventAnalysisHandler() *EventAnalysisHandler {
return &EventAnalysisHandler{
BaseHandler: *NewBaseHandler("event_analysis_handler"),
}
}
// Execute processes an event analysis task
func (h *EventAnalysisHandler) Execute(ctx context.Context, taskID uuid.UUID, taskType TaskType, params map[string]any, db *gorm.DB) error {
logger.Info(ctx, "Starting event analysis",
"task_id", taskID,
"task_type", taskType,
)
// TODO: Implement actual event analysis logic
// This would typically involve:
// 1. Fetching motor and trigger information
// 2. Analyzing events within the specified duration
// 3. Generating analysis report
// 4. Storing results in database
// Simulate work
logger.Info(ctx, "Event analysis completed",
"task_id", taskID,
"task_type", taskType,
)
return nil
}
// CanHandle returns true for event analysis tasks
func (h *EventAnalysisHandler) CanHandle(taskType TaskType) bool {
return taskType == TypeEventAnalysis
}
// BatchImportHandler handles batch import tasks
type BatchImportHandler struct {
BaseHandler
}
// NewBatchImportHandler creates a new BatchImportHandler
func NewBatchImportHandler() *BatchImportHandler {
return &BatchImportHandler{
BaseHandler: *NewBaseHandler("batch_import_handler"),
}
}
// Execute processes a batch import task
func (h *BatchImportHandler) Execute(ctx context.Context, taskID uuid.UUID, taskType TaskType, params map[string]any, db *gorm.DB) error {
logger.Info(ctx, "Starting batch import",
"task_id", taskID,
"task_type", taskType,
)
// TODO: Implement actual batch import logic
// This would typically involve:
// 1. Reading file from specified path
// 2. Parsing file content (CSV, Excel, etc.)
// 3. Validating and importing data into database
// 4. Generating import report
// Simulate work
logger.Info(ctx, "Batch import completed",
"task_id", taskID,
"task_type", taskType,
)
return nil
}
// CanHandle returns true for batch import tasks
func (h *BatchImportHandler) CanHandle(taskType TaskType) bool {
return taskType == TypeBatchImport
}
// CompositeHandler can handle multiple task types by delegating to appropriate handlers
type CompositeHandler struct {
factory *HandlerFactory
}
// NewCompositeHandler creates a new CompositeHandler
func NewCompositeHandler(factory *HandlerFactory) *CompositeHandler {
return &CompositeHandler{factory: factory}
}
// Execute delegates task execution to the appropriate handler
func (h *CompositeHandler) Execute(ctx context.Context, taskID uuid.UUID, taskType TaskType, params map[string]any, db *gorm.DB) error {
handler, err := h.factory.GetHandler(taskType)
if err != nil {
return fmt.Errorf("failed to get handler for task type %s: %w", taskType, err)
}
return handler.Execute(ctx, taskID, taskType, params, db)
}
// CanHandle returns true if any registered handler can handle the task type
func (h *CompositeHandler) CanHandle(taskType TaskType) bool {
_, err := h.factory.GetHandler(taskType)
return err == nil
}
// Name returns the composite handler name
func (h *CompositeHandler) Name() string {
return "composite_handler"
}
// DefaultHandlerFactory returns a HandlerFactory with all default handlers registered
func DefaultHandlerFactory(ctx context.Context) *HandlerFactory {
factory := NewHandlerFactory()
factory.CreateDefaultHandlers(ctx)
return factory
}
// DefaultCompositeHandler returns a CompositeHandler with all default handlers
func DefaultCompositeHandler(ctx context.Context) TaskHandler {
factory := DefaultHandlerFactory(ctx)
return NewCompositeHandler(factory)
}

Some files were not shown because too many files have changed in this diff.