69 Commits

Author SHA1 Message Date
a095444222 fixed stuck in loop
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m8s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m26s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m47s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m16s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-09 11:14:13 +02:00
5e7bc3df54 added debugging
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m12s
Rust Cross-Platform Build / Set Tag Name (push) Has been cancelled
Rust Cross-Platform Build / Build and Push Docker Image (push) Has been cancelled
Rust Cross-Platform Build / Create Tag (push) Has been cancelled
Rust Cross-Platform Build / Workflow Summary (push) Has been cancelled
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Has been cancelled
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Has been cancelled
2025-10-09 11:10:53 +02:00
1c7a169956 added container broadcasting
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m14s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m18s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 4m4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m19s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-09 10:59:21 +02:00
c7bce926e9 added docker registration dto 2025-10-09 10:39:52 +02:00
711083daa0 fixed server message handle 2025-10-06 13:01:03 +02:00
06cec6ff9f added Option to data structs 2025-10-06 12:43:15 +02:00
a7cae5e93f added docker metrics
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m3s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m56s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m37s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m12s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-05 22:43:48 +02:00
66428863e6 moved stats into own folder
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Failing after 1m9s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Has been skipped
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Has been skipped
Rust Cross-Platform Build / Set Tag Name (push) Has been skipped
Rust Cross-Platform Build / Build and Push Docker Image (push) Has been skipped
Rust Cross-Platform Build / Create Tag (push) Has been skipped
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
2025-10-05 15:21:24 +02:00
b35cac0dbe changed Dtos and Docker structs 2025-10-04 22:05:28 +02:00
bb55b46c34 fixed image version detectionr
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m4s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m49s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m35s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m8s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-04 20:38:20 +02:00
76d54cb433 fixed image detection in client container
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m8s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m0s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m43s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m8s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-04 20:25:35 +02:00
4dc8c56a5c moved eprintln to println 2025-10-04 17:11:40 +02:00
c81040b16b moved github to gitea
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 3s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m2s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m45s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m36s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m2s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-04 13:22:14 +02:00
68c307e258 added debug print
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m6s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m52s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m40s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m13s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-04 11:57:06 +02:00
5d00f072d8 fixed syntax
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m38s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m20s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 2m36s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m3s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-04 01:21:03 +02:00
9072e253ec removed else handling from set-tag
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 6s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m44s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Has been cancelled
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Has been cancelled
2025-10-04 00:56:57 +02:00
063ad113d7 added staging to branches for cicd 2025-10-04 00:55:43 +02:00
097be0f555 added staging to branches for cicd 2025-10-04 00:55:36 +02:00
c7b8e24a54 excluding builds on subbranches
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m26s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Has been cancelled
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Has been cancelled
2025-10-04 00:50:00 +02:00
fb1d016b36 excluding builds on subbranches
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m36s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m21s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 2m38s
2025-10-04 00:42:03 +02:00
89d58e9696 moved set tag before docker build
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m2s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m44s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m26s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 3s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 3m15s
Rust Cross-Platform Build / Create Tag (push) Failing after 6s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
2025-10-04 00:31:30 +02:00
45c6e8180c added only tagging on main dev and staging 2025-10-04 00:23:52 +02:00
c395885bbd removed pull request
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m4s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m49s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m31s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
2025-10-04 00:01:13 +02:00
75494e2a71 changed api endpoints register and hardware-infor
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m16s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m19s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m44s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 3m6s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 4s
2025-10-03 23:53:34 +02:00
8e6e291f6a changed api endpoints register and hardware-infor 2025-10-03 23:53:26 +02:00
bfeb43f38a added docker management
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m2s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m18s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m25s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m0s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-03 20:42:35 +02:00
2f5e2391f7 removed useless if statemnt 2025-10-03 14:30:41 +02:00
820952089a removed useless if statemnt 2025-10-03 14:19:48 +02:00
c745125f20 check if checks are needed or needs is enough
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m4s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m47s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m30s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m20s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-03 13:47:00 +02:00
758fa7608f added toolcache
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m10s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m1s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m47s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m8s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-03 13:17:48 +02:00
ee6b947f29 update
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m23s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m54s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m35s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m4s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 4s
2025-10-03 12:22:12 +02:00
18dd1ef528 no messy doc 2025-10-02 00:38:23 +02:00
8fa7866cc2 creating directory for doc
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m3s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m42s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 4m22s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m8s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-01 22:57:06 +02:00
238ad87119 replaced documentation post with seperate step
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m5s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m47s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 4m20s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m4s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-01 22:42:35 +02:00
d584e21fd9 replaced documentation post with seperate step 2025-10-01 22:33:00 +02:00
ac79a2e0b7 added right path for cargo doc push 2025-10-01 22:21:59 +02:00
1798c1270b added right path for cargo doc push 2025-10-01 22:21:44 +02:00
97d1019b69 added right path for cargo doc push
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m5s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Failing after 2m50s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Failing after 3m31s
Rust Cross-Platform Build / Build and Push Docker Image (push) Has been skipped
Rust Cross-Platform Build / Create Tag (push) Has been skipped
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
2025-10-01 21:49:02 +02:00
e30ceb75d9 added cargo doc push
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m6s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Failing after 3m2s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Failing after 3m39s
Rust Cross-Platform Build / Build and Push Docker Image (push) Has been skipped
Rust Cross-Platform Build / Create Tag (push) Has been skipped
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
2025-10-01 19:49:48 +02:00
4681e0c694 fixed id type
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m7s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m56s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m40s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m11s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-01 19:00:34 +02:00
49f1af392d fixed units
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m8s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m5s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m56s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m24s
Rust Cross-Platform Build / Create Tag (push) Successful in 6s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
2025-10-01 13:13:18 +02:00
f78e48900a fixed comments 2025-10-01 12:11:34 +02:00
8c49a63a50 added commentation 2025-10-01 12:07:53 +02:00
d994be757e get clientcontainer; container findable with id image name ahs to be implemented 2025-10-01 11:36:12 +02:00
d7a58e00da added listing all running containers
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m13s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m20s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 4m7s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m22s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-09-29 14:54:03 +02:00
d2efc64487 moved container functions into new mod 2025-09-28 18:54:17 +02:00
1f23c303c1 removed overloading metrics print statements
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m10s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m6s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m43s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m14s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-09-28 17:46:42 +02:00
1cc85bfa14 added debugging for docker image
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m10s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m6s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m54s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m12s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-09-28 01:00:26 +02:00
8bac357dc6 added debugging for docker image
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m10s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m9s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m54s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m12s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Successful in 4s
2025-09-28 00:36:52 +02:00
7154c01f7a added search for docker image in different locations
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m6s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m45s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m30s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m19s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-09-28 00:24:47 +02:00
813bf4e407 cleaned up disk information
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m4s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m37s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m21s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 1m55s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 4s
2025-09-27 23:48:37 +02:00
bf6f89c954 added error handling for client version print 2025-09-27 23:48:02 +02:00
dbe87fedb6 added error handling for client version print 2025-09-27 21:36:59 +02:00
67b24b33aa added inital server communcation task 2025-09-27 21:34:30 +02:00
67ebbdaa19 added image screening for current container 2025-09-26 23:36:09 +02:00
6fd275802c removed unneccessary conditions for if statement
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 6s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m10s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m43s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m32s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m2s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-09-26 13:20:26 +02:00
9018adf998 removed unused env
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m12s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m46s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m33s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m3s
Rust Cross-Platform Build / Create Tag (push) Has been skipped
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
2025-09-26 11:40:02 +02:00
3124697f10 removed unused env 2025-09-26 11:39:05 +02:00
30382fedef changed secrets to AUTOMATION_*
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Has been cancelled
Rust Cross-Platform Build / Run Tests (push) Has been cancelled
Rust Cross-Platform Build / Set Tag Name (push) Has been cancelled
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Has been cancelled
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Has been cancelled
Rust Cross-Platform Build / Build and Push Docker Image (push) Has been cancelled
Rust Cross-Platform Build / Create Tag (push) Has been cancelled
Rust Cross-Platform Build / Workflow Summary (push) Has been cancelled
2025-09-26 11:16:38 +02:00
8910155524 removed setup_rust
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Has been cancelled
Rust Cross-Platform Build / Run Tests (push) Has been cancelled
Rust Cross-Platform Build / Set Tag Name (push) Has been cancelled
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Has been cancelled
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Has been cancelled
Rust Cross-Platform Build / Build and Push Docker Image (push) Has been cancelled
Rust Cross-Platform Build / Create Tag (push) Has been cancelled
Rust Cross-Platform Build / Workflow Summary (push) Has been cancelled
2025-09-26 11:15:30 +02:00
7a68df41ac removed redundant and unvalid conditions
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Has been cancelled
Rust Cross-Platform Build / Setup Rust Toolchain (push) Has been cancelled
Rust Cross-Platform Build / Run Tests (push) Has been cancelled
Rust Cross-Platform Build / Set Tag Name (push) Has been cancelled
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Has been cancelled
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Has been cancelled
Rust Cross-Platform Build / Build and Push Docker Image (push) Has been cancelled
Rust Cross-Platform Build / Create Tag (push) Has been cancelled
Rust Cross-Platform Build / Workflow Summary (push) Has been cancelled
2025-09-26 02:06:25 +02:00
60ce51cd82 depends on docker-build
Some checks failed
Rust Cross-Platform Build / Create Tag (push) Blocked by required conditions
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Setup Rust Toolchain (push) Successful in 23s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m0s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 1m26s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 2m6s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 1m50s
Rust Cross-Platform Build / Workflow Summary (push) Has been cancelled
2025-09-26 01:53:41 +02:00
54fca8b1d3 added docker secrets
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Setup Rust Toolchain (push) Successful in 23s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m4s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m37s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m16s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 1m54s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Failing after 12m40s
2025-09-26 00:15:29 +02:00
aa876d9e5d added newer action version for login
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Setup Rust Toolchain (push) Successful in 23s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m6s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m35s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m25s
Rust Cross-Platform Build / Create Tag (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Failing after 2m11s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
2025-09-26 00:00:24 +02:00
88625ff986 added docker api for restart and client updat
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Setup Rust Toolchain (push) Successful in 23s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m0s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m25s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
Rust Cross-Platform Build / Build and Push Docker Image (push) Failing after 12s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
2025-09-25 22:02:11 +02:00
428be53fff added docker api for restart and client updat 2025-09-25 22:02:00 +02:00
83cb815e76 added docker image update option 2025-09-23 23:27:13 +02:00
755617c86f removed loggin option for client 2025-09-23 21:28:19 +02:00
314bf8c327 added server comm 2025-09-21 01:40:32 +02:00
20 changed files with 1887 additions and 218 deletions

View File

@@ -3,17 +3,12 @@ name: Rust Cross-Platform Build
on:
workflow_dispatch:
push:
branches: [ "development", "main", "feature/*", "bugfix/*", "enhancement/*" ]
branches: [ "development", "main", "staging" ]
tags: [ "v*.*.*" ]
pull_request:
branches: [ "development", "main" ]
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1
REGISTRY: git.triggermeelmo.com
IMAGE_NAME: donpat1to/watcher-agent
TAG: ${{ github.ref == 'refs/heads/main' && 'latest' || github.ref == 'refs/heads/development' && 'development' || github.ref_type == 'tag' && github.ref_name || 'pr' }}
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -54,24 +49,9 @@ jobs:
exit 1
fi
setup-rust:
name: Setup Rust Toolchain
needs: detect-project
if: ${{ !failure() && !cancelled() }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
with:
toolchain: stable
targets: x86_64-unknown-linux-gnu, x86_64-pc-windows-gnu
components: rustfmt, clippy
test:
name: Run Tests
needs: [detect-project, setup-rust]
needs: [detect-project]
if: ${{ !failure() && !cancelled() }}
runs-on: ubuntu-latest
steps:
@@ -94,47 +74,9 @@ jobs:
# working-directory: ${{ needs.detect-project.outputs.project-dir }}
# run: cargo clippy -- -D warnings
set-tag:
name: Set Tag Name
runs-on: ubuntu-latest
outputs:
tag_name: ${{ steps.set_tag.outputs.tag_name }}
steps:
- uses: actions/checkout@v4
- name: Determine next semantic version tag
id: set_tag
run: |
git fetch --tags
# Find latest tag matching vX.Y.Z
latest_tag=$(git tag --list 'v*.*.*' --sort=-v:refname | head -n 1)
if [[ -z "$latest_tag" ]]; then
major=0
minor=0
patch=0
else
version="${latest_tag#v}"
IFS='.' read -r major minor patch <<< "$version"
fi
if [[ "${GITHUB_REF}" == "refs/heads/main" ]]; then
major=$((major + 1))
minor=0
patch=0
elif [[ "${GITHUB_REF}" == "refs/heads/development" ]]; then
minor=$((minor + 1))
patch=0
else
patch=$((patch + 1))
fi
new_tag="v${major}.${minor}.${patch}"
echo "tag_name=${new_tag}" >> $GITHUB_OUTPUT
# audit:
# name: Security Audit
# needs: [detect-project, setup-rust]
# needs: [detect-project]
# if: ${{ !failure() && !cancelled() }}
# runs-on: ubuntu-latest
# steps:
@@ -154,7 +96,7 @@ jobs:
build:
name: Build (${{ matrix.target }})
needs: [detect-project, setup-rust, test, audit]
needs: [detect-project, test]
if: ${{ !failure() && !cancelled() }}
runs-on: ubuntu-latest
strategy:
@@ -207,13 +149,65 @@ jobs:
path: |
${{ needs.detect-project.outputs.project-dir }}/target/${{ matrix.target }}/release/${{ needs.detect-project.outputs.project-name }}${{ matrix.os == 'windows' && '.exe' || '' }}
set-tag:
name: Set Tag Name
needs: [detect-project, build]
#if: ${{ !failure() && !cancelled() && github.event_name != 'pull_request' }}
runs-on: ubuntu-latest
outputs:
tag_name: ${{ steps.set_tag.outputs.tag_name }}
should_tag: ${{ steps.set_tag.outputs.should_tag }}
steps:
- uses: actions/checkout@v4
- name: Determine next semantic version tag
id: set_tag
run: |
git fetch --tags
# Find latest tag matching vX.Y.Z
latest_tag=$(git tag --list 'v*.*.*' --sort=-v:refname | head -n 1)
if [[ -z "$latest_tag" ]]; then
major=0
minor=0
patch=0
else
version="${latest_tag#v}"
IFS='.' read -r major minor patch <<< "$version"
fi
if [[ "${GITHUB_REF}" == "refs/heads/main" ]]; then
major=$((major + 1))
minor=0
patch=0
new_tag="v${major}.${minor}.${patch}"
echo "tag_name=${new_tag}" >> $GITHUB_OUTPUT
echo "should_tag=true" >> $GITHUB_OUTPUT
echo "Creating new major version tag: ${new_tag}"
elif [[ "${GITHUB_REF}" == "refs/heads/development" ]]; then
minor=$((minor + 1))
patch=0
new_tag="v${major}.${minor}.${patch}"
echo "tag_name=${new_tag}" >> $GITHUB_OUTPUT
echo "should_tag=true" >> $GITHUB_OUTPUT
echo "Creating new minor version tag: ${new_tag}"
elif [[ "${GITHUB_REF}" == "refs/heads/staging" ]]; then
patch=$((patch + 1))
new_tag="v${major}.${minor}.${patch}"
echo "tag_name=${new_tag}" >> $GITHUB_OUTPUT
echo "should_tag=true" >> $GITHUB_OUTPUT
echo "Creating new patch version tag: ${new_tag}"
fi
docker-build:
name: Build and Push Docker Image
needs: [detect-project, build, set-tag]
if: |
always() &&
needs.detect-project.result == 'success' &&
needs.build.result == 'success' &&
needs.set-tag.outputs.should_tag == 'true' &&
github.event_name != 'pull_request'
runs-on: ubuntu-latest
environment: production
@@ -227,14 +221,14 @@ jobs:
path: dist/
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
- name: Login to Docker Registry
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
username: ${{ secrets.AUTOMATION_USERNAME }}
password: ${{ secrets.AUTOMATION_PASSWORD }}
- name: Build Docker image
uses: docker/build-push-action@v4
@@ -250,8 +244,7 @@ jobs:
tag:
name: Create Tag
needs: [build, set-tag]
if: github.event_name == 'push'
needs: [docker-build, build, set-tag]
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
@@ -267,13 +260,13 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
echo "Creating new tag: ${{ needs.set-tag.outputs.tag_name }}"
git tag ${{ needs.set-tag.outputs.tag_name }}
git push origin ${{ needs.set-tag.outputs.tag_name }}
summary:
name: Workflow Summary
needs: [test, audit, build, docker-build]
needs: [test, build, docker-build]
if: always()
runs-on: ubuntu-latest
steps:
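
The relocated set-tag job above derives the next semantic version from the branch being built: a push to main bumps the major version, development bumps the minor, and staging bumps the patch; other refs produce no tag. The bash in the workflow is authoritative; the following is only an illustrative Rust port of the same bump rule, handy for checking the logic outside CI (the function name and sample values are assumptions, not part of the repository).

// Illustrative port of the set-tag bump rule from the workflow above.
fn next_tag(latest: Option<&str>, branch: &str) -> Option<String> {
    // Parse "vX.Y.Z"; start from 0.0.0 when no tag exists yet.
    let (mut major, mut minor, mut patch) = match latest {
        Some(t) => {
            let parts: Vec<u64> = t
                .trim_start_matches('v')
                .split('.')
                .filter_map(|p| p.parse().ok())
                .collect();
            if parts.len() != 3 {
                return None;
            }
            (parts[0], parts[1], parts[2])
        }
        None => (0, 0, 0),
    };
    match branch {
        "main" => { major += 1; minor = 0; patch = 0; }   // new major version
        "development" => { minor += 1; patch = 0; }       // new minor version
        "staging" => { patch += 1; }                      // new patch version
        _ => return None,                                 // no tag for other branches
    }
    Some(format!("v{}.{}.{}", major, minor, patch))
}

fn main() {
    assert_eq!(next_tag(Some("v1.4.2"), "development").as_deref(), Some("v1.5.0"));
    assert_eq!(next_tag(None, "staging").as_deref(), Some("v0.0.1"));
    assert_eq!(next_tag(Some("v1.4.2"), "feature/x"), None);
}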

View File

@@ -19,9 +19,11 @@ nvml-wrapper = "0.11"
nvml-wrapper-sys = "0.9.0"
anyhow = "1.0.98"
# Docker .env loading
config = "0.13"
dotenvy = "0.15"
regex = "1.11.3"
# Docker API access
bollard = "0.19"
futures-util = "0.3"
[target.'cfg(windows)'.dependencies]
winapi = { version = "0.3", features = ["winuser", "pdh", "ifmib", "iphlpapi", "winerror" ,"wbemcli", "combaseapi"] }
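
The dependency additions bring in bollard and futures-util for Docker API access and dotenvy for loading a .env file. A minimal sketch of the .env side, assuming a SERVER_URL variable and a localhost fallback (both the variable name and the default URL are assumptions for illustration, not taken from the repository):

use std::env;

fn main() {
    // Load key=value pairs from ./.env into the process environment, if the file exists.
    let _ = dotenvy::dotenv();
    let base_url = env::var("SERVER_URL")
        .unwrap_or_else(|_| "https://localhost:8080".to_string());
    println!("agent would talk to {}", base_url);
}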

View File

@@ -1,15 +1,47 @@
/// # API Module
///
/// This module provides all HTTP communication between WatcherAgent and the backend server.
///
/// ## Responsibilities
/// - **Registration:** Registers the agent with the backend and retrieves its server ID and IP address.
/// - **Heartbeat:** Periodically sends heartbeat signals to indicate liveness.
/// - **Metrics Reporting:** Sends collected hardware and network metrics to the backend.
/// - **Command Listening:** Polls for and executes remote commands from the backend (e.g., update image, restart container).
///
/// ## Usage
/// These functions are called from the main agent loop and background tasks. All network operations are asynchronous and robust to transient failures.
use std::time::Duration;
use crate::docker::container;
use crate::docker::serverclientcomm::handle_server_message;
use crate::hardware::HardwareInfo;
use crate::models::{HeartbeatDto, IdResponse, MetricDto, RegistrationDto};
use crate::models::{
Acknowledgment, DockerMetricDto, DockerRegistrationDto, HeartbeatDto, IdResponse, MetricDto,
RegistrationDto, ServerMessage,
};
use anyhow::Result;
use reqwest::{Client, StatusCode};
use std::error::Error;
use tokio::time::sleep;
use bollard::Docker;
/// Registers this agent with the backend server and retrieves its server ID and IP address.
///
/// This function collects local hardware information, prepares a registration payload, and sends it to the backend. It will retry registration until successful, handling network errors and server-side failures gracefully.
///
/// # Arguments
/// * `base_url` - The base URL of the backend server (e.g., `https://server.example.com`).
///
/// # Returns
/// * `Result<(i32, String), Box<dyn Error + Send + Sync>>` - Tuple of server ID and registered IP address on success.
///
/// # Errors
/// Returns an error if unable to register after repeated attempts.
pub async fn register_with_server(
base_url: &str,
) -> Result<(i32, String), Box<dyn Error + Send + Sync>> {
) -> Result<(u16, String), Box<dyn Error + Send + Sync>> {
// First get local IP
let ip = local_ip_address::local_ip()?.to_string();
println!("Local IP address detected: {}", ip);
@@ -27,18 +59,18 @@ pub async fn register_with_server(
// Prepare registration data
let registration = RegistrationDto {
id: server_id,
server_id: server_id,
ip_address: registered_ip.clone(),
cpu_type: hardware.cpu.name.clone().unwrap_or_default(),
cpu_cores: (hardware.cpu.cores).unwrap_or_default(),
gpu_type: hardware.gpu.name.clone().unwrap_or_default(),
ram_size: hardware.memory.total.unwrap_or_default(),
ram_size: hardware.memory.total_size.unwrap_or_default(),
};
// Try to register (will retry on failure)
loop {
println!("Attempting to register with server...");
let url = format!("{}/monitoring/register-agent-by-id", base_url);
let url = format!("{}/monitoring/hardware-info", base_url);
match client.post(&url).json(&registration).send().await {
Ok(resp) if resp.status().is_success() => {
println!("✅ Successfully registered with server.");
@@ -60,15 +92,25 @@ pub async fn register_with_server(
}
}
/// Looks up the server ID for the given IP address from the backend server.
///
/// This function will retry until a valid response is received, handling network errors and server-side failures.
///
/// # Arguments
/// * `base_url` - The base URL of the backend server.
/// * `ip` - The local IP address to look up.
///
/// # Returns
/// * `Result<(i32, String), Box<dyn Error + Send + Sync>>` - Tuple of server ID and registered IP address.
async fn get_server_id_by_ip(
base_url: &str,
ip: &str,
) -> Result<(i32, String), Box<dyn Error + Send + Sync>> {
) -> Result<(u16, String), Box<dyn Error + Send + Sync>> {
let client = Client::builder()
.danger_accept_invalid_certs(true)
.build()?;
let url = format!("{}/monitoring/server-id-by-ip?ipAddress={}", base_url, ip);
let url = format!("{}/monitoring/register?ipAddress={}", base_url, ip);
loop {
println!("Attempting to fetch server ID for IP {}...", ip);
@@ -111,6 +153,60 @@ async fn get_server_id_by_ip(
}
}
pub async fn broadcast_docker_containers(
base_url: &str,
server_id: u16,
container_dto: &mut DockerRegistrationDto,
) -> Result<(), Box<dyn Error + Send + Sync>> {
// First get local IP
println!("Preparing to broadcast docker containers...");
// Create HTTP client for registration
let client = Client::builder()
.danger_accept_invalid_certs(true)
.build()?;
// Prepare registration data
let container_dto = container_dto;
container_dto.server_id = server_id;
// Try to register (will retry on failure)
loop {
println!("Attempting to broadcast containers...");
let url = format!("{}/monitoring/service-discovery", base_url);
match client.post(&url).json(&container_dto).send().await {
Ok(resp) if resp.status().is_success() => {
println!(
"✅ Successfully broadcasted following docker container: {:?}",
container_dto
);
return Ok(());
}
Ok(resp) => {
let status = resp.status();
let text = resp.text().await.unwrap_or_default();
println!(
"⚠️ Broadcasting failed ({}): {} (will retry in 10 seconds)",
status, text
);
}
Err(err) => {
println!("⚠️ Broadcasting failed: {} (will retry in 10 seconds)", err);
}
}
sleep(Duration::from_secs(10)).await;
}
}
/// Periodically sends heartbeat signals to the backend server to indicate agent liveness.
///
/// This function runs in a background task and will retry on network errors.
///
/// # Arguments
/// * `base_url` - The base URL of the backend server.
/// * `ip` - The IP address of the agent.
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if heartbeats are sent successfully.
pub async fn heartbeat_loop(base_url: &str, ip: &str) -> Result<(), Box<dyn Error + Send + Sync>> {
let client = Client::builder()
.danger_accept_invalid_certs(true)
@@ -134,6 +230,16 @@ pub async fn heartbeat_loop(base_url: &str, ip: &str) -> Result<(), Box<dyn Erro
}
}
/// Sends collected hardware and network metrics to the backend server.
///
/// This function is called periodically from the metrics collection loop. It logs the result and retries on network errors.
///
/// # Arguments
/// * `base_url` - The base URL of the backend server.
/// * `metrics` - The metrics data to send (see [`MetricDto`]).
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if metrics are sent successfully.
pub async fn send_metrics(
base_url: &str,
metrics: &MetricDto,
@@ -153,3 +259,149 @@ pub async fn send_metrics(
Ok(())
}
/// Polls the backend server for remote commands and executes them.
///
/// This function runs in a background task, polling the server for new messages. It acknowledges receipt and execution of each command, and handles errors gracefully.
///
/// # Arguments
/// * `docker` - Reference to a Bollard Docker client.
/// * `base_url` - The base URL of the backend server.
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if commands are handled successfully.
pub async fn listening_to_server(
docker: &Docker,
base_url: &str,
) -> Result<(), Box<dyn Error + Send + Sync>> {
let url = format!("{}/api/message", base_url);
let client = reqwest::Client::new();
loop {
// Get message from server
let resp = client.get(&url).send().await;
match resp {
Ok(response) => {
if response.status().is_success() {
match response.json::<ServerMessage>().await {
Ok(msg) => {
// Acknowledge receipt immediately
if let Err(e) = send_acknowledgment(
&client,
base_url,
&msg.message_id,
"received",
"Message received successfully",
)
.await
{
eprintln!("Failed to send receipt acknowledgment: {}", e);
}
// Handle the message
let result = handle_server_message(docker, msg.clone()).await;
// Send execution result acknowledgment
let (status, details) = match result {
Ok(_) => ("success", "Message executed successfully".to_string()),
Err(e) => ("error", format!("Execution failed: {}", e)),
};
if let Err(e) = send_acknowledgment(
&client,
base_url,
&msg.message_id,
status,
&details,
)
.await
{
eprintln!("Failed to send execution acknowledgment: {}", e);
}
}
Err(e) => {
eprintln!("Failed to parse message: {}", e);
}
}
} else if response.status() == reqwest::StatusCode::NO_CONTENT {
// No new messages, continue polling
println!("No new messages from server");
} else {
eprintln!("Server returned error status: {}", response.status());
}
}
Err(e) => {
eprintln!("Failed to reach server: {}", e);
}
}
// Poll every 5 seconds (or use WebSocket for real-time)
sleep(Duration::from_secs(5)).await;
}
}
/// Sends an acknowledgment to the backend server for a received or executed command message.
///
/// This function is used internally by [`listening_to_server`] to confirm receipt and execution status of commands.
///
/// # Arguments
/// * `client` - Reference to a reqwest HTTP client.
/// * `base_url` - The base URL of the backend server.
/// * `message_id` - The ID of the message being acknowledged.
/// * `status` - Status string (e.g., "received", "success", "error").
/// * `details` - Additional details about the acknowledgment.
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if acknowledgment is sent successfully.
pub async fn send_acknowledgment(
client: &reqwest::Client,
base_url: &str,
message_id: &str,
status: &str,
details: &str,
) -> Result<(), Box<dyn Error + Send + Sync>> {
let ack_url = format!("{}/api/acknowledge", base_url);
let acknowledgment = Acknowledgment {
message_id: message_id.to_string(),
status: status.to_string(),
details: details.to_string(),
};
let response = client.post(&ack_url).json(&acknowledgment).send().await?;
if response.status().is_success() {
println!(
"Acknowledgment sent successfully for message {}",
message_id
);
} else {
eprintln!(
"Server returned error for acknowledgment: {}",
response.status()
);
}
Ok(())
}
pub async fn send_docker_metrics(
base_url: &str,
docker_metrics: &DockerMetricDto,
) -> Result<(), Box<dyn Error + Send + Sync>> {
let client = Client::new();
let url = format!("{}/monitoring/docker-metric", base_url);
println!("Docker Metrics: {:?}", docker_metrics);
match client.post(&url).json(&docker_metrics).send().await {
Ok(res) => println!(
"✅ Sent docker metrics for server {} | Status: {}",
docker_metrics.server_id,
res.status()
),
Err(err) => eprintln!("❌ Failed to send docker metrics: {}", err),
}
Ok(())
}
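
The api.rs changes above add registration, heartbeat, metrics, container broadcasting, and command listening. The agent's main loop is not part of this diff; the following is only a hedged sketch of how these functions could be wired together, assuming they are exposed from an api module and that the base URL comes from configuration.

use bollard::Docker;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // `api` refers to the module shown in the diff above; the URL is a placeholder.
    let base_url = "https://server.example.com".to_string();

    // Register first so the agent learns its server ID and reported IP.
    let (server_id, ip) = api::register_with_server(&base_url).await?;
    println!("registered as server {} with IP {}", server_id, ip);

    // Heartbeats and command handling run as independent background tasks.
    let hb_url = base_url.clone();
    let hb_ip = ip.clone();
    let heartbeat = tokio::spawn(async move {
        let _ = api::heartbeat_loop(&hb_url, &hb_ip).await;
    });

    let cmd_url = base_url.clone();
    let commands = tokio::spawn(async move {
        if let Ok(docker) = Docker::connect_with_local_defaults() {
            let _ = api::listening_to_server(&docker, &cmd_url).await;
        }
    });

    // The metrics loop would run here; both tasks above normally never return.
    let _ = heartbeat.await;
    let _ = commands.await;
    Ok(())
}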

View File

@@ -0,0 +1,214 @@
//! Docker container utilities for WatcherAgent
//!
//! Provides functions to list and process Docker containers using the Bollard library.
//!
use crate::docker::stats;
use crate::docker::stats::{ContainerCpuInfo, ContainerNetworkInfo};
use crate::models::DockerContainer;
use bollard::query_parameters::{
CreateImageOptions, ListContainersOptions, RestartContainerOptions,
};
use bollard::Docker;
use futures_util::StreamExt;
use std::error::Error;
/// Returns a list of available Docker containers.
///
/// # Arguments
/// * `docker` - Reference to a Bollard Docker client.
///
/// # Returns
/// * `Vec<DockerContainer>` - Vector of Docker container info.
pub async fn get_available_containers(docker: &Docker) -> Vec<DockerContainer> {
println!("=== DOCKER CONTAINER LIST ===");
let options = Some(ListContainersOptions {
all: true,
..Default::default()
});
let containers_list = match docker.list_containers(options).await {
Ok(containers) => {
println!("Available containers ({}):", containers.len());
containers
.into_iter()
.filter_map(|container| {
container.id.as_ref()?; // Skip if no ID
let id = container.id?;
let short_id = if id.len() > 12 { &id[..12] } else { &id };
let name = container
.names
.and_then(|names| names.into_iter().next())
.map(|name| name.trim_start_matches('/').to_string())
.unwrap_or_else(|| "unknown".to_string());
let image = container
.image
.as_ref()
.map(|img| img.to_string())
.unwrap_or_else(|| "unknown".to_string());
Some(DockerContainer {
id: short_id.to_string(),
image: Some(image),
name: Some(name),
})
})
.collect()
}
Err(e) => {
eprintln!("Failed to list containers: {}", e);
Vec::new()
}
};
containers_list
}
/// Pulls a new Docker image and restarts the current container.
///
/// # Arguments
/// * `docker` - Reference to a Bollard Docker client.
/// * `image` - The name of the Docker image to pull.
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if updated successfully, error otherwise.
pub async fn update_docker_image(
docker: &Docker,
image: &str,
) -> Result<(), Box<dyn Error + Send + Sync>> {
println!("Updating to {}", image);
// 1. Pull new image
let mut stream = docker.create_image(
Some(CreateImageOptions {
from_image: Some(image.to_string()),
..Default::default()
}),
None,
None,
);
// Use the stream with proper trait bounds
while let Some(result) = StreamExt::next(&mut stream).await {
match result {
Ok(progress) => {
if let Some(status) = progress.status {
println!("Pull status: {}", status);
}
}
Err(e) => {
eprintln!("Error pulling image: {}", e);
break;
}
}
}
// 2. Restart the current container
let options = Some(ListContainersOptions {
all: true,
..Default::default()
});
let container_id = docker
.list_containers(options)
.await?
.into_iter()
.find_map(|c| {
c.image
.as_ref()
.and_then(|img| if img == image { c.id } else { None })
});
let _ = restart_container(docker, &container_id.unwrap()).await;
Ok(())
}
/// Restarts the agent's own Docker container.
///
/// # Arguments
/// * `docker` - Reference to a Bollard Docker client.
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if restarted successfully, error otherwise.
pub async fn restart_container(
docker: &Docker,
container_id: &str,
) -> Result<(), Box<dyn Error + Send + Sync>> {
println!("Restarting container {}", container_id);
if let Err(e) = docker
.restart_container(
&container_id.to_string(),
Some(RestartContainerOptions {
signal: None,
t: Some(0),
}),
)
.await
{
eprintln!("Failed to restart container: {}", e);
}
Ok(())
}
/*
/// Extracts a Docker container ID from a string line.
///
/// # Arguments
/// * `line` - The input string containing a container ID or related info.
///
/// # Returns
/// * `Option<String>` - The extracted container ID if found.
pub fn extract_client_container_id(line: &str) -> Option<String> {
// ...existing code...
}
*/
/// Gets network statistics for a specific container
pub async fn get_network_stats(
docker: &Docker,
container_id: &str,
) -> Result<ContainerNetworkInfo, Box<dyn Error + Send + Sync>> {
let (_, net_info, _) = stats::get_single_container_stats(docker, container_id).await?;
if let Some(net_info) = net_info {
Ok(net_info)
} else {
// Return default network info if not found
println!("No network info found for container {}", container_id);
Ok(ContainerNetworkInfo {
container_id: Some(container_id.to_string()),
rx_bytes: None,
tx_bytes: None,
rx_packets: None,
tx_packets: None,
rx_errors: None,
tx_errors: None,
})
}
}
/// Gets CPU statistics for a specific container
pub async fn get_cpu_stats(
docker: &Docker,
container_id: &str,
) -> Result<ContainerCpuInfo, Box<dyn Error + Send + Sync>> {
let (cpu_info, _, _) = stats::get_single_container_stats(docker, container_id).await?;
if let Some(cpu_info) = cpu_info {
Ok(cpu_info)
} else {
// Return default CPU info if not found
println!("No CPU info found for container {}", container_id);
Ok(ContainerCpuInfo {
container_id: Some(container_id.to_string()),
cpu_usage_percent: None,
system_cpu_usage: None,
container_cpu_usage: None,
online_cpus: None,
})
}
}
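
A short usage sketch for the listing helper above, assuming it is exposed as docker::container::get_available_containers (the module path is an assumption): it connects to the local daemon and prints each container's short ID, name, and image.

use bollard::Docker;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let client = Docker::connect_with_local_defaults()?;
    for c in docker::container::get_available_containers(&client).await {
        println!(
            "{} {} ({})",
            c.id,
            c.name.as_deref().unwrap_or("unknown"),
            c.image.as_deref().unwrap_or("unknown"),
        );
    }
    Ok(())
}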

View File

@@ -0,0 +1,235 @@
/// # Docker Module
///
/// This module provides Docker integration for WatcherAgent, including container enumeration, statistics, and lifecycle management.
///
/// ## Responsibilities
/// - **Container Management:** Lists, inspects, and manages Docker containers relevant to the agent.
/// - **Statistics Aggregation:** Collects network and CPU statistics for all managed containers.
/// - **Lifecycle Operations:** Supports container restart and ID lookup for agent self-management.
///
pub mod container;
pub mod serverclientcomm;
pub mod stats;
use crate::models::{
DockerCollectMetricDto, DockerContainer, DockerContainerCpuDto, DockerContainerInfo,
DockerContainerNetworkDto, DockerContainerRamDto, DockerMetricDto, DockerRegistrationDto,
};
use bollard::Docker;
use std::error::Error;
/// Main Docker manager that holds the Docker client and provides all operations
#[derive(Debug, Clone)]
pub struct DockerManager {
pub docker: Docker,
}
impl Default for DockerManager {
fn default() -> Self {
Self {
docker: Docker::connect_with_local_defaults()
.unwrap_or_else(|e| panic!("Failed to create default Docker connection: {}", e)),
}
}
}
impl DockerManager {
/// Creates a new DockerManager instance
pub fn new() -> Result<Self, Box<dyn Error + Send + Sync>> {
let docker = Docker::connect_with_local_defaults()
.map_err(|e| format!("Failed to connect to Docker: {}", e))?;
Ok(Self { docker })
}
/// Creates a DockerManager instance with optional Docker connection
pub fn new_optional() -> Option<Self> {
Docker::connect_with_local_defaults()
.map(|docker| Self { docker })
.ok()
}
/// Finds the Docker container running the agent by image name
pub async fn get_client_container(
&self,
) -> Result<Option<DockerContainer>, Box<dyn Error + Send + Sync>> {
let containers = container::get_available_containers(&self.docker).await;
let client_image = "watcher-agent";
Ok(containers
.into_iter()
.find(|c| c.clone().image.unwrap().contains(client_image))
.map(|container| DockerContainer {
id: container.id,
image: container.image,
name: container.name,
}))
}
/// Gets the current client version (image name) if running in Docker
pub async fn get_client_version(&self) -> String {
match self.get_client_container().await {
Ok(Some(container)) => container
.image
.clone()
.unwrap()
.split(':')
.next()
.unwrap_or("unknown")
.to_string(),
Ok(None) => {
println!("Warning: No WatcherAgent container found");
"unknown".to_string()
}
Err(e) => {
println!("Warning: Could not get current image version: {}", e);
"unknown".to_string()
}
}
}
/// Checks if Docker is available and the agent is running in a container
pub async fn is_dockerized(&self) -> bool {
self.get_client_container()
.await
.map(|c| c.is_some())
.unwrap_or(false)
}
/// Gets all available containers as DTOs for registration
pub async fn get_containers(
&self,
) -> Result<Vec<DockerContainer>, Box<dyn Error + Send + Sync>> {
let containers = container::get_available_containers(&self.docker).await;
Ok(containers
.into_iter()
.map(|container| DockerContainer {
id: container.id,
image: container.image,
name: container.name,
})
.collect())
}
/// Gets the number of running containers
pub async fn get_container_count(&self) -> Result<usize, Box<dyn Error + Send + Sync>> {
let containers = container::get_available_containers(&self.docker).await;
Ok(containers.len())
}
/// Restarts a specific container by ID
pub async fn restart_container(
&self,
container_id: &str,
) -> Result<(), Box<dyn Error + Send + Sync>> {
container::restart_container(&self.docker, container_id).await
}
/// Collects Docker metrics for all containers
pub async fn collect_metrics(&self) -> Result<DockerMetricDto, Box<dyn Error + Send + Sync>> {
let containers = self.get_containers().await?;
let (cpu_stats, net_stats, mem_stats) = stats::get_container_stats(&self.docker).await?;
let container_infos_total: Vec<_> = containers
.into_iter()
.map(|container| {
let cpu = cpu_stats
.iter()
.find(|c| c.container_id == Some(container.id.clone()))
.cloned();
let network = net_stats
.iter()
.find(|n| n.container_id == Some(container.id.clone()))
.cloned();
let ram = mem_stats
.iter()
.find(|m| m.container_id == Some(container.id.clone()))
.cloned();
DockerContainerInfo {
container: Some(container),
status: None, // Status can be fetched if needed
cpu,
network,
ram,
}
})
.collect();
let container_infos: Vec<DockerCollectMetricDto> = container_infos_total
.into_iter()
.map(|info| DockerCollectMetricDto {
id: Some(info.container.unwrap().id).unwrap_or("".to_string()),
cpu: info
.cpu
.unwrap()
.cpu_usage_percent
.map(|load| DockerContainerCpuDto {
cpu_load: Some(load),
})
.unwrap_or(DockerContainerCpuDto { cpu_load: None }),
ram: info
.ram
.unwrap()
.memory_usage_percent
.map(|load| DockerContainerRamDto {
cpu_load: Some(load),
})
.unwrap_or(DockerContainerRamDto { cpu_load: None }),
network: DockerContainerNetworkDto {
net_in: info
.network
.as_ref()
.unwrap()
.rx_bytes
.map(|bytes| bytes as f64)
.or(Some(0.0)),
net_out: info
.network
.unwrap()
.tx_bytes
.map(|bytes| bytes as f64)
.or(Some(0.0)),
},
})
.collect();
let dto = DockerMetricDto {
server_id: 0,
containers: serde_json::to_string(&container_infos)?,
};
Ok(dto)
}
pub async fn create_registration_dto(
&self,
) -> Result<DockerRegistrationDto, Box<dyn Error + Send + Sync>> {
let containers = self.get_containers().await?;
let dto = DockerRegistrationDto {
server_id: 0,
//container_count,
containers: serde_json::to_string(&containers)?,
};
Ok(dto)
}
}
// Keep these as utility functions if needed, but they should use DockerManager internally
impl DockerContainer {
/// Returns the container ID
pub fn id(&self) -> &str {
&self.id
}
/// Returns the image name
pub fn image(&self) -> &str {
&self.image.as_deref().unwrap_or("unknown")
}
/// Returns the container name
pub fn name(&self) -> &str {
&self.name.as_deref().unwrap_or("unknown")
}
}
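
A sketch of driving the DockerManager defined above from a periodic collection task. The 30-second interval, the placeholder server ID, and the module path are assumptions for illustration; how the agent actually schedules collection is not shown in this diff.

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let manager = docker::DockerManager::new()?;
    println!("running inside Docker: {}", manager.is_dockerized().await);

    loop {
        // Gather per-container CPU, RAM and network figures as a DockerMetricDto.
        let mut metrics = manager.collect_metrics().await?;
        metrics.server_id = 42; // would come from register_with_server in the real agent
        println!("collected docker metrics for server {}", metrics.server_id);
        tokio::time::sleep(std::time::Duration::from_secs(30)).await;
    }
}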

View File

@@ -0,0 +1,59 @@
//! Server-client communication utilities for WatcherAgent
//!
//! Handles server commands, Docker image updates, and container management using the Bollard library.
//!
use crate::models::ServerMessage;
use super::container::{restart_container, update_docker_image};
//use bollard::query_parameters::{CreateImageOptions, RestartContainerOptions};
use bollard::Docker;
use std::error::Error;
/// Handles a message from the backend server and dispatches the appropriate action.
///
/// # Arguments
/// * `docker` - Reference to a Bollard Docker client.
/// * `msg` - The server message to handle.
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if handled successfully, error otherwise.
pub async fn handle_server_message(
docker: &Docker,
msg: ServerMessage,
) -> Result<(), Box<dyn Error + Send + Sync>> {
println!("Handling server message: {:?}", msg);
// Handle different message types
match msg.message_type.as_str() {
"update_image" => {
if let Some(image_name) = msg.data.get("image").and_then(|v| v.as_str()) {
println!("Received update command for image: {}", image_name);
// Call your update_docker_image function here
update_docker_image(docker, image_name).await?;
Ok(())
} else {
Err("Missing image name in update message".into())
}
}
"restart_container" => {
if let Some(image_name) = msg.data.get("image").and_then(|v| v.as_str()) {
println!("Received restart command for image: {}", image_name);
// Call restart_container here
restart_container(docker, image_name).await?;
Ok(())
} else {
Err("Missing image name in update message".into())
}
}
"stop_agent" => {
println!("Received stop agent command");
// Exit the process immediately (no graceful shutdown yet)
std::process::exit(0);
}
_ => {
eprintln!("Unknown message type: {}", msg.message_type);
Err(format!("Unknown message type: {}", msg.message_type).into())
}
}
}
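// Illustrative sketch (not part of the original file): dispatching a hypothetical backend
// payload through handle_server_message. The message shape and the "nginx:latest" image are
// assumptions for the example; a reachable local Docker socket via bollard's defaults is assumed.
async fn dispatch_example() -> Result<(), Box<dyn Error + Send + Sync>> {
    let docker = Docker::connect_with_local_defaults()?;
    let msg = ServerMessage {
        message_type: "update_image".to_string(),
        data: serde_json::json!({ "image": "nginx:latest" }),
        message_id: "example-1".to_string(),
    };
    // Dispatches to update_docker_image because of the "update_image" message type.
    handle_server_message(&docker, msg).await
}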

View File

@@ -0,0 +1,99 @@
use super::ContainerCpuInfo;
use bollard::query_parameters::{ListContainersOptions, StatsOptions};
use bollard::Docker;
use futures_util::stream::TryStreamExt;
use std::error::Error;
/// Get CPU statistics for all containers
pub async fn get_all_containers_cpu_stats(
docker: &Docker,
) -> Result<Vec<ContainerCpuInfo>, Box<dyn Error + Send + Sync>> {
let containers = docker
.list_containers(Some(ListContainersOptions {
all: true,
..Default::default()
}))
.await?;
let mut cpu_infos = Vec::new();
for container in containers {
let id = container.id.unwrap_or_default();
// Skip if no ID
if id.is_empty() {
continue;
}
if let Some(cpu_info) = get_single_container_cpu_stats(docker, &id).await? {
cpu_infos.push(cpu_info);
}
}
Ok(cpu_infos)
}
/// Get CPU statistics for a specific container
pub async fn get_single_container_cpu_stats(
docker: &Docker,
container_id: &str,
) -> Result<Option<ContainerCpuInfo>, Box<dyn Error + Send + Sync>> {
let mut stats_stream = docker.stats(
container_id,
Some(StatsOptions {
stream: false,
one_shot: true,
}),
);
if let Some(stats) = stats_stream.try_next().await? {
if let (Some(cpu_stats), Some(precpu_stats)) = (&stats.cpu_stats, &stats.precpu_stats) {
if let (Some(cpu_usage), Some(pre_cpu_usage)) =
(&cpu_stats.cpu_usage, &precpu_stats.cpu_usage)
{
let cpu_delta = cpu_usage
.total_usage
.unwrap_or(0)
.saturating_sub(pre_cpu_usage.total_usage.unwrap_or(0));
let system_delta = cpu_stats
.system_cpu_usage
.unwrap_or(0)
.saturating_sub(precpu_stats.system_cpu_usage.unwrap_or(0));
let online_cpus = cpu_stats.online_cpus.unwrap_or(1);
let cpu_percent = if system_delta > 0 && online_cpus > 0 {
(cpu_delta as f64 / system_delta as f64) * online_cpus as f64 * 100.0
} else {
0.0
};
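// Worked example: cpu_delta = 200_000_000 and system_delta = 4_000_000_000 with
// 4 online CPUs yields (0.05) * 4 * 100.0 = 20.0 percent.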
return Ok(Some(ContainerCpuInfo {
container_id: Some(container_id.to_string()),
cpu_usage_percent: Some(cpu_percent),
system_cpu_usage: Some(cpu_stats.system_cpu_usage.unwrap_or(0)),
container_cpu_usage: Some(cpu_usage.total_usage.unwrap_or(0)),
online_cpus: Some(online_cpus),
}));
}
}
}
Ok(None)
}
/// Get average CPU usage across all containers
pub async fn get_average_cpu_usage(docker: &Docker) -> Result<f64, Box<dyn Error + Send + Sync>> {
let cpu_infos = get_all_containers_cpu_stats(docker).await?;
if cpu_infos.is_empty() {
return Ok(0.0);
}
let total_cpu: f64 = cpu_infos
.iter()
.map(|cpu| cpu.cpu_usage_percent.unwrap_or(0.0))
.sum();
Ok(total_cpu / cpu_infos.len() as f64)
}
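// Illustrative usage sketch (not part of the original file): prints the computed CPU
// percentage per container. Assumes a reachable local Docker socket.
async fn print_cpu_usage_example() -> Result<(), Box<dyn Error + Send + Sync>> {
    let docker = Docker::connect_with_local_defaults()?;
    for info in get_all_containers_cpu_stats(&docker).await? {
        println!(
            "{}: {:.2}%",
            info.container_id.as_deref().unwrap_or("unknown"),
            info.cpu_usage_percent.unwrap_or(0.0)
        );
    }
    Ok(())
}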

View File

@@ -0,0 +1,90 @@
pub mod cpu;
pub mod network;
pub mod ram;
use serde::{Deserialize, Serialize};
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ContainerCpuInfo {
pub container_id: Option<String>,
pub cpu_usage_percent: Option<f64>,
pub system_cpu_usage: Option<u64>,
pub container_cpu_usage: Option<u64>,
pub online_cpus: Option<u32>,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ContainerNetworkInfo {
pub container_id: Option<String>,
pub rx_bytes: Option<u64>,
pub tx_bytes: Option<u64>,
pub rx_packets: Option<u64>,
pub tx_packets: Option<u64>,
pub rx_errors: Option<u64>,
pub tx_errors: Option<u64>,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ContainerMemoryInfo {
pub container_id: Option<String>,
pub memory_usage: Option<u64>,
pub memory_limit: Option<u64>,
pub memory_usage_percent: Option<f64>,
}
use bollard::Docker;
use std::error::Error;
/// Get container statistics for all containers using an existing Docker client
pub async fn get_container_stats(
docker: &Docker,
) -> Result<
(
Vec<ContainerCpuInfo>,
Vec<ContainerNetworkInfo>,
Vec<ContainerMemoryInfo>,
),
Box<dyn Error + Send + Sync>,
> {
let cpu_infos = cpu::get_all_containers_cpu_stats(docker).await?;
let net_infos = network::get_all_containers_network_stats(docker).await?;
let mem_infos = ram::get_all_containers_memory_stats(docker).await?;
Ok((cpu_infos, net_infos, mem_infos))
}
/// Get container statistics for a specific container
pub async fn get_single_container_stats(
docker: &Docker,
container_id: &str,
) -> Result<
(
Option<ContainerCpuInfo>,
Option<ContainerNetworkInfo>,
Option<ContainerMemoryInfo>,
),
Box<dyn Error + Send + Sync>,
> {
let cpu_info = cpu::get_single_container_cpu_stats(docker, container_id).await?;
let net_info = network::get_single_container_network_stats(docker, container_id).await?;
let mem_info = ram::get_single_container_memory_stats(docker, container_id).await?;
Ok((cpu_info, net_info, mem_info))
}
/// Get total network statistics across all containers
pub async fn get_total_network_stats(
docker: &Docker,
) -> Result<(u64, u64), Box<dyn Error + Send + Sync>> {
network::get_total_network_stats(docker).await
}
/// Get average CPU usage across all containers
pub async fn get_average_cpu_usage(docker: &Docker) -> Result<f64, Box<dyn Error + Send + Sync>> {
cpu::get_average_cpu_usage(docker).await
}
/// Get total memory usage across all containers
pub async fn get_total_memory_usage(docker: &Docker) -> Result<u64, Box<dyn Error + Send + Sync>> {
ram::get_total_memory_usage(docker).await
}
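// Illustrative usage sketch (not part of the original file): gathers the three stat vectors
// in one call and reports simple totals. Assumes a reachable local Docker socket.
async fn print_stats_summary_example(docker: &Docker) -> Result<(), Box<dyn Error + Send + Sync>> {
    let (cpu, net, mem) = get_container_stats(docker).await?;
    let (rx, tx) = get_total_network_stats(docker).await?;
    println!(
        "cpu entries: {}, net entries: {}, mem entries: {}, total rx: {} B, total tx: {} B",
        cpu.len(),
        net.len(),
        mem.len(),
        rx,
        tx
    );
    Ok(())
}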

View File

@@ -0,0 +1,79 @@
use super::ContainerNetworkInfo;
use bollard::query_parameters::{ListContainersOptions, StatsOptions};
use bollard::Docker;
use futures_util::stream::TryStreamExt;
use std::error::Error;
/// Get network statistics for all containers
pub async fn get_all_containers_network_stats(
docker: &Docker,
) -> Result<Vec<ContainerNetworkInfo>, Box<dyn Error + Send + Sync>> {
let containers = docker
.list_containers(Some(ListContainersOptions {
all: true,
..Default::default()
}))
.await?;
let mut net_infos = Vec::new();
for container in containers {
let id = container.id.unwrap_or_default();
// Skip if no ID
if id.is_empty() {
continue;
}
if let Some(net_info) = get_single_container_network_stats(docker, &id).await? {
net_infos.push(net_info);
}
}
Ok(net_infos)
}
/// Get network statistics for a specific container
pub async fn get_single_container_network_stats(
docker: &Docker,
container_id: &str,
) -> Result<Option<ContainerNetworkInfo>, Box<dyn Error + Send + Sync>> {
let mut stats_stream = docker.stats(
container_id,
Some(StatsOptions {
stream: false,
one_shot: true,
}),
);
if let Some(stats) = stats_stream.try_next().await? {
if let Some(networks) = stats.networks {
// Take the first network interface (usually eth0)
if let Some((_name, net)) = networks.into_iter().next() {
return Ok(Some(ContainerNetworkInfo {
container_id: Some(container_id.to_string()),
rx_bytes: net.rx_bytes,
tx_bytes: net.tx_bytes,
rx_packets: net.rx_packets,
tx_packets: net.tx_packets,
rx_errors: net.rx_errors,
tx_errors: net.tx_errors,
}));
}
}
}
Ok(None)
}
/// Get total network statistics across all containers
pub async fn get_total_network_stats(
docker: &Docker,
) -> Result<(u64, u64), Box<dyn Error + Send + Sync>> {
let net_infos = get_all_containers_network_stats(docker).await?;
let total_rx: u64 = net_infos.iter().map(|net| net.rx_bytes.unwrap_or(0)).sum();
let total_tx: u64 = net_infos.iter().map(|net| net.tx_bytes.unwrap_or(0)).sum();
Ok((total_rx, total_tx))
}

View File

@@ -0,0 +1,77 @@
use super::ContainerMemoryInfo;
use bollard::query_parameters::{ListContainersOptions, StatsOptions};
use bollard::Docker;
use futures_util::stream::TryStreamExt;
use std::error::Error;
/// Get memory statistics for all containers
pub async fn get_all_containers_memory_stats(
docker: &Docker,
) -> Result<Vec<ContainerMemoryInfo>, Box<dyn Error + Send + Sync>> {
let containers = docker
.list_containers(Some(ListContainersOptions {
all: true,
..Default::default()
}))
.await?;
let mut mem_infos = Vec::new();
for container in containers {
let id = container.id.unwrap_or_default();
// Skip if no ID
if id.is_empty() {
continue;
}
if let Some(mem_info) = get_single_container_memory_stats(docker, &id).await? {
mem_infos.push(mem_info);
}
}
Ok(mem_infos)
}
/// Get memory statistics for a specific container
pub async fn get_single_container_memory_stats(
docker: &Docker,
container_id: &str,
) -> Result<Option<ContainerMemoryInfo>, Box<dyn Error + Send + Sync>> {
let mut stats_stream = docker.stats(
container_id,
Some(StatsOptions {
stream: false,
one_shot: true,
}),
);
if let Some(stats) = stats_stream.try_next().await? {
if let Some(memory_stats) = &stats.memory_stats {
let memory_usage = memory_stats.usage.unwrap_or(0);
let memory_limit = memory_stats.limit.unwrap_or(1); // Avoid division by zero
let memory_usage_percent = if memory_limit > 0 {
(memory_usage as f64 / memory_limit as f64) * 100.0
} else {
0.0
};
return Ok(Some(ContainerMemoryInfo {
container_id: Some(container_id.to_string()),
memory_usage: Some(memory_usage),
memory_limit: Some(memory_limit),
memory_usage_percent: Some(memory_usage_percent),
}));
}
}
Ok(None)
}
/// Get total memory usage across all containers
pub async fn get_total_memory_usage(docker: &Docker) -> Result<u64, Box<dyn Error + Send + Sync>> {
let mem_infos = get_all_containers_memory_stats(docker).await?;
let total_memory: u64 = mem_infos.iter().map(|mem| mem.memory_usage.unwrap_or(0)).sum();
Ok(total_memory)
}

View File

@@ -2,6 +2,29 @@ use anyhow::Result;
use std::error::Error;
use sysinfo::System;
/// # CPU Hardware Module
///
/// This module provides CPU information collection for WatcherAgent, including load, temperature, and system uptime.
///
/// ## Responsibilities
/// - **CPU Detection:** Identifies CPU model and core count.
/// - **Metric Collection:** Queries CPU load, temperature, and uptime.
/// - **Error Handling:** Graceful fallback if metrics are unavailable.
///
/// ## Units
/// - `current_load`: CPU usage as a percentage (**0.0–100.0**)
/// - `current_temp`: CPU temperature in **degrees Celsius (°C)**
/// - `uptime`: System uptime in **seconds (s)**
///
/// CPU statistics for the host system.
///
/// # Fields
/// - `name`: CPU model name (string)
/// - `cores`: Number of physical CPU cores (integer)
/// - `current_load`: CPU usage as a percentage (**0.0–100.0**)
/// - `current_temp`: CPU temperature in **degrees Celsius (°C)**
/// - `uptime`: System uptime in **seconds (s)**
/// - `host_name`: Hostname of the system (string)
#[derive(Debug)]
pub struct CpuInfo {
pub name: Option<String>,
@@ -12,6 +35,10 @@ pub struct CpuInfo {
pub host_name: Option<String>,
}
/// Collects CPU information (model, cores, load, temperature, uptime).
///
/// # Returns
/// * `Result<CpuInfo, Box<dyn Error + Send + Sync>>` - CPU statistics or error if unavailable.
pub async fn get_cpu_info() -> Result<CpuInfo, Box<dyn Error + Send + Sync>> {
let mut sys = System::new_all();
@@ -33,12 +60,23 @@ pub async fn get_cpu_info() -> Result<CpuInfo, Box<dyn Error + Send + Sync>> {
})
}
/// Queries system for current CPU load (percentage).
///
/// # Arguments
/// * `sys` - Mutable reference to sysinfo::System
///
/// # Returns
/// * `Result<f64, Box<dyn Error + Send + Sync>>` - CPU load as percentage.
pub async fn get_cpu_load(sys: &mut System) -> Result<f64, Box<dyn Error + Send + Sync>> {
sys.refresh_cpu_all();
tokio::task::yield_now().await; // Allow other tasks to run
Ok(sys.global_cpu_usage() as f64)
}
/// Attempts to read CPU temperature from system sensors (Linux only).
///
/// # Returns
/// * `Result<f64, Box<dyn Error + Send + Sync>>` - CPU temperature in degrees Celsius (°C).
pub async fn get_cpu_temp() -> Result<f64, Box<dyn Error + Send + Sync>> {
println!("Attempting to get CPU temperature...");

View File

@@ -1,130 +1,165 @@
use std::error::Error;
use crate::models::DiskInfoDetailed;
use std::error::Error;
use anyhow::Result;
use sysinfo::DiskUsage;
use sysinfo::{Component, Components, Disk, Disks, System};
use sysinfo::{Component, Components, Disk, Disks};
use serde::Serialize;
#[derive(Debug)]
/// # Disk Hardware Module
///
/// This module provides disk information collection for WatcherAgent, including total and per-disk statistics and temperature data.
///
/// ## Responsibilities
/// - **Disk Enumeration:** Lists all physical disks and their properties.
/// - **Usage Calculation:** Computes total and per-disk usage, available space, and usage percentage.
/// - **Temperature Monitoring:** Associates disk components with temperature sensors if available.
///
/// ## Units
/// - All sizes are in **bytes** unless otherwise noted.
/// - Temperatures are in **degrees Celsius (°C)**.
///
/// Summary of disk statistics for the system.
///
/// # Fields
/// - `total_size`: Total disk size in bytes (all disks > 100MB)
/// - `total_used`: Total used disk space in bytes
/// - `total_available`: Total available disk space in bytes
/// - `total_usage`: Usage percentage (0.0–100.0)
/// - `detailed_info`: Vector of [`DiskInfoDetailed`] for each disk
#[derive(Serialize, Debug)]
pub struct DiskInfo {
pub total: Option<f64>,
pub used: Option<f64>,
pub free: Option<f64>,
pub total_size: Option<f64>,
pub total_used: Option<f64>,
pub total_available: Option<f64>,
pub total_usage: Option<f64>,
pub detailed_info: Vec<DiskInfoDetailed>,
}
pub async fn get_disk_info() -> Result<DiskInfo> {
/// Collects disk information for all detected disks, including usage and temperature.
///
/// This function enumerates all disks, calculates usage statistics, and attempts to associate temperature sensors with disk components.
///
/// # Returns
/// * `Result<DiskInfo, Box<dyn std::error::Error + Send + Sync>>` - Disk statistics and details, or error if collection fails.
pub async fn get_disk_info() -> Result<DiskInfo, Box<dyn std::error::Error + Send + Sync>> {
let disks = Disks::new_with_refreshed_list();
let _disk_types = [
sysinfo::DiskKind::HDD,
sysinfo::DiskKind::SSD,
sysinfo::DiskKind::Unknown(0),
];
let (_, _, _, _) = get_disk_utitlization().unwrap();
let mut total = 0;
let mut used = 0;
let mut detailed_info = Vec::new();
// Collect detailed disk information
for disk in disks.list() {
if disk.total_space() > 100 * 1024 * 1024 {
// > 100MB
total += disk.total_space();
used += disk.total_space() - disk.available_space();
if disk.kind() == sysinfo::DiskKind::Unknown(0) {
continue;
}
let disk_used = disk.total_space() - disk.available_space();
detailed_info.push(DiskInfoDetailed {
disk_name: disk.name().to_string_lossy().into_owned(),
disk_kind: format!("{:?}", disk.kind()),
disk_total_space: disk.total_space() as f64,
disk_available_space: disk.available_space() as f64,
disk_used_space: disk_used as f64,
disk_mount_point: disk.mount_point().to_string_lossy().into_owned(),
component_disk_label: String::new(),
component_disk_temperature: 0.0,
});
}
// Get component temperatures
let components = Components::new_with_refreshed_list();
for component in &components {
if let Some(temperature) = component.temperature() {
// Update detailed info with temperature data if it matches a disk component
for disk_info in &mut detailed_info {
if component.label().contains(&disk_info.disk_name) {
disk_info.component_disk_label = component.label().to_string();
disk_info.component_disk_temperature = temperature;
}
}
}
}
// Calculate totals (only disks > 100MB)
let (total_size, total_used, total_available) = calculate_disk_totals(&disks);
let (total_size, total_used, total_available, total_usage) = if total_size > 0.0 {
(total_size, total_used, total_available, (total_used / total_size) * 100.0)
} else {
match get_disk_info_fallback() {
Ok(fallback_data) => fallback_data,
Err(_) => (0.0, 0.0, 0.0, 0.0), // Default values if fallback fails
}
};
Ok(DiskInfo {
total: Some(total as f64),
used: Some(used as f64),
free: Some((total - used) as f64),
total_size: if total_size > 0.0 { Some(total_size) } else { None },
total_used: if total_used > 0.0 { Some(total_used) } else { None },
total_available: if total_available > 0.0 { Some(total_available) } else { None },
total_usage: if total_usage > 0.0 { Some(total_usage) } else { None },
detailed_info,
})
}
pub fn get_disk_utitlization() -> Result<(f64, f64, f64, f64), Box<dyn Error>> {
let mut sys = System::new();
sys.refresh_all();
let mut count = 0;
fn calculate_disk_totals(disks: &Disks) -> (f64, f64, f64) {
let mut total_size = 0u64;
let mut total_used = 0u64;
let mut total_available = 0u64;
let disks = Disks::new_with_refreshed_list();
for disk in disks.list() {
// Only print disks with known kind
if disk.kind() == sysinfo::DiskKind::Unknown(0) {
continue;
}
println!(
"Disk_Name: {:?}:\n---- Disk_Kind: {},\n---- Total: {},\n---- Available: {},\n---- Used: {}, \n---- Mount_Point: {:?}",
disk.name(),
disk.kind(),
disk.total_space(),
disk.available_space(),
disk.total_space() - disk.available_space(),
disk.mount_point()
);
}
let components = Components::new_with_refreshed_list();
for component in &components {
if let Some(temperature) = component.temperature() {
println!(
"Component_Label: {}, Temperature: {}°C",
component.label(),
temperature
);
if disk.total_space() > 100 * 1024 * 1024 { // > 100MB
total_size += disk.total_space();
total_available += disk.available_space();
total_used += disk.total_space() - disk.available_space();
}
}
// Calculations
let total_size = if count > 0 {
total_size as f64 // in Bytes
} else {
// Fallback: try df on Linux
println!("Fallback: Using 'df' command to get disk info.");
(total_size as f64, total_used as f64, total_available as f64)
}
#[cfg(target_os = "linux")]
{
fn get_disk_info_fallback() -> Result<(f64, f64, f64, f64), Box<dyn Error + Send + Sync>> {
use std::process::Command;
if let Ok(output) = Command::new("df")
let output = Command::new("df")
.arg("-B1")
.arg("--output=size,used")
.output()
{
.arg("--output=size,used,avail")
.output()?;
let stdout = String::from_utf8_lossy(&output.stdout);
let mut total_size = 0u64;
let mut total_used = 0u64;
let mut total_available = 0u64;
let mut count = 0;
for line in stdout.lines().skip(1) {
let parts: Vec<&str> = line.split_whitespace().collect();
if parts.len() == 2 {
if let (Ok(size), Ok(used)) =
(parts[0].parse::<u64>(), parts[1].parse::<u64>())
{
if parts.len() >= 3 {
if let (Ok(size), Ok(used), Ok(avail)) = (
parts[0].parse::<u64>(),
parts[1].parse::<u64>(),
parts[2].parse::<u64>(),
) {
total_size += size;
total_used += used;
total_available += avail;
count += 1;
}
}
}
total_size as f64 // in Bytes
} else {
0.0
}
}
#[cfg(not(target_os = "linux"))]
{
0.0
}
};
let usage = if total_size > 0.0 {
let usage = if total_size > 0 {
(total_used as f64 / total_size as f64) * 100.0
} else {
0.0
};
Ok((
total_size,
total_used as f64,
total_available as f64,
usage as f64,
)) // Disk temp stays 0.0 without dedicated hardware sensors
Ok((total_size as f64, total_used as f64, total_available as f64, usage))
}
#[cfg(not(target_os = "linux"))]
fn get_disk_info_fallback() -> Result<(f64, f64, f64, f64), Box<dyn Error + Send + Sync>> {
Ok((0.0, 0.0, 0.0, 0.0))
}
pub fn _get_disk_temp_for_component(component: &Component) -> Option<f64> {

View File

@@ -2,6 +2,29 @@ use anyhow::Result;
use nvml_wrapper::Nvml;
use std::error::Error;
/// # GPU Hardware Module
///
/// This module provides GPU information collection for WatcherAgent, including load, temperature, and VRAM statistics.
///
/// ## Responsibilities
/// - **GPU Detection:** Identifies GPU model and capabilities.
/// - **Metric Collection:** Queries GPU load, temperature, and VRAM usage using NVML (NVIDIA only).
/// - **Error Handling:** Graceful fallback if GPU or NVML is unavailable.
///
/// ## Units
/// - `current_load`: GPU usage as a percentage (**0.0–100.0**)
/// - `current_temp`: GPU temperature in **degrees Celsius (°C)**
/// - `vram_total`: Total VRAM in **bytes**
/// - `vram_used`: Used VRAM in **bytes**
///
/// GPU statistics for the host system.
///
/// # Fields
/// - `name`: GPU model name (string)
/// - `current_load`: GPU usage as a percentage (**0.0–100.0**)
/// - `current_temp`: GPU temperature in **degrees Celsius (°C)**
/// - `vram_total`: Total VRAM in **bytes**
/// - `vram_used`: Used VRAM in **bytes**
#[derive(Debug)]
pub struct GpuInfo {
pub name: Option<String>,
@@ -11,6 +34,12 @@ pub struct GpuInfo {
pub vram_used: Option<f64>,
}
/// Collects GPU information (load, temperature, VRAM) using NVML.
///
/// This function attempts to query the first NVIDIA GPU using NVML. If unavailable, it returns a fallback with only the detected GPU name.
///
/// # Returns
/// * `Result<GpuInfo, Box<dyn Error + Send + Sync>>` - GPU statistics or fallback if unavailable.
pub async fn get_gpu_info() -> Result<GpuInfo, Box<dyn Error + Send + Sync>> {
match get_gpu_metrics() {
Ok((gpu_temp, gpu_load, vram_used, vram_total)) => {
@@ -37,6 +66,10 @@ pub async fn get_gpu_info() -> Result<GpuInfo, Box<dyn Error + Send + Sync>> {
}
}
/// Queries NVML for GPU metrics: temperature, load, VRAM used/total.
///
/// # Returns
/// * `Result<(f64, f64, f64, f64), Box<dyn Error + Send + Sync>>` - Tuple of (temperature °C, load %, VRAM used bytes, VRAM total bytes).
pub fn get_gpu_metrics() -> Result<(f64, f64, f64, f64), Box<dyn Error + Send + Sync>> {
let nvml = Nvml::init();
if let Ok(nvml) = nvml {

View File

@@ -3,25 +3,56 @@ use std::error::Error;
use anyhow::Result;
use sysinfo::System;
/// # Memory Hardware Module
///
/// This module provides memory information collection for WatcherAgent, including total, used, and free RAM.
///
/// ## Responsibilities
/// - **Memory Detection:** Queries system for total, used, and free RAM.
/// - **Usage Calculation:** Computes memory usage percentage.
/// - **Error Handling:** Graceful fallback if metrics are unavailable.
///
/// ## Units
/// - `total`, `used`, `free`: RAM in **bytes**
///
/// Memory statistics for the host system.
///
/// # Fields
/// - `total`: Total RAM in **bytes**
/// - `used`: Used RAM in **bytes**
/// - `free`: Free RAM in **bytes**
#[derive(Debug)]
pub struct MemoryInfo {
pub total: Option<f64>,
pub total_size: Option<f64>,
pub used: Option<f64>,
pub free: Option<f64>,
pub current_load: Option<f64>,
}
pub async fn get_memory_info() -> Result<MemoryInfo> {
/// Collects memory information (total, used, free RAM).
///
/// # Returns
/// * `Result<MemoryInfo>` - Memory statistics or error if unavailable.
pub async fn get_memory_info() -> Result<MemoryInfo, Box<dyn Error + Send + Sync>> {
let mut sys = System::new();
sys.refresh_memory();
Ok(MemoryInfo {
total: Some(sys.total_memory() as f64),
total_size: Some(sys.total_memory() as f64),
used: Some(sys.used_memory() as f64),
free: Some(sys.free_memory() as f64),
current_load: Some(get_memory_usage(&mut sys)?)
})
}
pub fn _get_memory_usage(sys: &mut System) -> Result<f64, Box<dyn Error + Send + Sync>> {
/// Computes memory usage percentage from sysinfo::System.
///
/// # Arguments
/// * `sys` - Mutable reference to sysinfo::System
///
/// # Returns
/// * `Result<f64, Box<dyn Error + Send + Sync>>` - Memory usage as percentage.
pub fn get_memory_usage(sys: &mut System) -> Result<f64, Box<dyn Error + Send + Sync>> {
sys.refresh_memory();
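// Worked example: 8 GiB used of 16 GiB total -> (8.0 / 16.0) * 100.0 = 50.0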
Ok((sys.used_memory() as f64 / sys.total_memory() as f64) * 100.0)
}

View File

@@ -14,6 +14,23 @@ pub use memory::get_memory_info;
pub use network::get_network_info;
pub use network::NetworkMonitor;
/// # Hardware Module
///
/// This module aggregates all hardware subsystems for WatcherAgent, providing unified collection and access to CPU, GPU, memory, disk, and network statistics.
///
/// ## Responsibilities
/// - **Subsystem Aggregation:** Combines all hardware modules into a single struct for easy access.
/// - **Unified Collection:** Provides a single async method to collect all hardware metrics at once.
///
/// Aggregated hardware statistics for the host system.
///
/// # Fields
/// - `cpu`: CPU statistics (see [`CpuInfo`])
/// - `gpu`: GPU statistics (see [`GpuInfo`])
/// - `memory`: Memory statistics (see [`MemoryInfo`])
/// - `disk`: Disk statistics (see [`DiskInfo`])
/// - `network`: Network statistics (see [`NetworkInfo`])
/// - `network_monitor`: Rolling monitor for network bandwidth
#[derive(Debug)]
pub struct HardwareInfo {
pub cpu: cpu::CpuInfo,
@@ -25,6 +42,10 @@ pub struct HardwareInfo {
}
impl HardwareInfo {
/// Collects all hardware statistics asynchronously.
///
/// # Returns
/// * `Result<HardwareInfo, Box<dyn Error + Send + Sync>>` - Aggregated hardware statistics or error if any subsystem fails.
pub async fn collect() -> Result<Self, Box<dyn Error + Send + Sync>> {
let mut network_monitor = network::NetworkMonitor::new();
Ok(Self {

View File

@@ -2,6 +2,24 @@ use std::error::Error;
use std::result::Result;
use std::time::Instant;
/// # Network Hardware Module
///
/// This module provides network information collection for WatcherAgent, including interface enumeration and bandwidth statistics.
///
/// ## Responsibilities
/// - **Interface Detection:** Lists all network interfaces.
/// - **Bandwidth Monitoring:** Tracks receive/transmit rates using a rolling monitor.
/// - **Error Handling:** Graceful fallback if metrics are unavailable.
///
/// ## Units
/// - `rx_rate`, `tx_rate`: Network bandwidth in **bytes per second (B/s)**
///
/// Network statistics for the host system.
///
/// # Fields
/// - `interfaces`: List of network interface names (strings)
/// - `rx_rate`: Receive bandwidth in **bytes per second (B/s)**
/// - `tx_rate`: Transmit bandwidth in **bytes per second (B/s)**
#[derive(Debug)]
pub struct NetworkInfo {
pub interfaces: Option<Vec<String>>,
@@ -9,6 +27,13 @@ pub struct NetworkInfo {
pub tx_rate: Option<f64>,
}
/// Rolling monitor for network bandwidth statistics.
///
/// # Fields
/// - `prev_rx`: Previous received bytes
/// - `prev_tx`: Previous transmitted bytes
/// - `last_update`: Timestamp of last update
#[derive(Debug)]
pub struct NetworkMonitor {
prev_rx: u64,
@@ -23,6 +48,7 @@ impl Default for NetworkMonitor {
}
impl NetworkMonitor {
/// Creates a new `NetworkMonitor` for bandwidth tracking.
pub fn new() -> Self {
Self {
prev_rx: 0,
@@ -31,6 +57,10 @@ impl NetworkMonitor {
}
}
/// Updates the network usage statistics and returns current rx/tx rates.
///
/// # Returns
/// * `Result<(f64, f64), Box<dyn Error>>` - Tuple of (rx_rate, tx_rate) in bytes per second.
pub fn update_usage(&mut self) -> Result<(f64, f64), Box<dyn Error>> {
let (current_rx, current_tx) = get_network_bytes()?;
let elapsed = self.last_update.elapsed().as_secs_f64();
@@ -55,6 +85,13 @@ impl NetworkMonitor {
}
}
/// Collects network information (interfaces, rx/tx rates) using a monitor.
///
/// # Arguments
/// * `monitor` - Mutable reference to a `NetworkMonitor`
///
/// # Returns
/// * `Result<NetworkInfo, Box<dyn Error>>` - Network statistics or error if unavailable.
pub async fn get_network_info(monitor: &mut NetworkMonitor) -> Result<NetworkInfo, Box<dyn Error>> {
let (rx_rate, tx_rate) = monitor.update_usage()?;
Ok(NetworkInfo {

View File

@@ -1,18 +1,55 @@
/// WatcherAgent - A Rust-based system monitoring agent
/// This agent collects hardware metrics and sends them to a backend server.
/// It supports CPU, GPU, RAM, disk, and network metrics.
/// # WatcherAgent
///
/// **WatcherAgent** is a cross-platform system monitoring agent written in Rust.
///
/// ## Overview
/// This agent collects real-time hardware metrics (CPU, GPU, RAM, disk, network) and communicates with a backend server for registration, reporting, and remote control. It is designed for deployment in environments where automated monitoring and remote management of system resources is required.
///
/// ## Features
/// - **Hardware Metrics:** Collects CPU, GPU, RAM, disk, and network statistics using platform-specific APIs.
/// - **Docker Integration:** Detects and manages its own Docker container, supports image updates and container restarts.
/// - **Server Communication:** Registers with a backend server, sends periodic heartbeats, and reports metrics securely.
/// - **Remote Commands:** Listens for and executes commands from the backend (e.g., update image, restart container, stop agent).
///
/// ## Modules
/// - [`api`]: Handles HTTP communication with the backend server (registration, heartbeat, metrics, commands).
/// - [`hardware`]: Collects hardware metrics from the host system (CPU, GPU, RAM, disk, network).
/// - [`metrics`]: Orchestrates metric collection and reporting.
/// - [`models`]: Defines data structures for server communication and metrics.
/// - [`docker`]: Integrates with Docker for container management and agent lifecycle.
///
/// ## Usage
/// Run the agent with the backend server URL as an argument:
/// ```sh
/// watcheragent <server-url>
/// ```
///
/// The agent will register itself, start collecting metrics, and listen for remote commands.
pub mod api;
pub mod docker;
pub mod hardware;
pub mod metrics;
pub mod models;
use bollard::Docker;
use std::env;
use std::error::Error;
use std::marker::Send;
use std::marker::Sync;
use std::result::Result;
use std::sync::Arc;
use tokio::task::JoinHandle;
/// Awaits a spawned asynchronous task and flattens its nested `Result` type.
///
/// This utility is used to handle the result of a `tokio::spawn`ed task that itself returns a `Result`,
/// propagating any errors from both the task and its execution.
///
/// # Type Parameters
/// * `T` - The type returned by the task on success.
///
/// # Arguments
/// * `handle` - The `JoinHandle` of the spawned task.
///
/// # Returns
/// * `Result<T, Box<dyn Error + Send + Sync>>` - The result of the task, or an error if the task failed or panicked.
async fn flatten<T>(
handle: JoinHandle<Result<T, Box<dyn Error + Send + Sync>>>,
) -> Result<T, Box<dyn Error + Send + Sync>> {
@@ -23,29 +60,84 @@ async fn flatten<T>(
}
}
/// Main entry point for the WatcherAgent application.
///
/// This function performs the following steps:
/// 1. Initializes the Docker client for container management.
/// 2. Detects the current running image version.
/// 3. Parses command-line arguments to obtain the backend server URL.
/// 4. Registers the agent with the backend server and retrieves its server ID and IP address.
/// 5. Spawns background tasks for:
/// - Listening for remote commands from the server
/// - Sending periodic heartbeat signals
/// - Collecting and reporting hardware metrics
/// 6. Waits for all background tasks to complete and logs their results.
///
/// # Arguments
/// * `server-url` - The URL of the backend server to register and report metrics to (passed as a command-line argument).
///
/// # Errors
/// Returns an error if registration or any background task fails, or if required arguments are missing.
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error + Send + Sync>> {
// Parse command-line arguments
let args: Vec<String> = env::args().collect();
// args[0] is the binary name, args[1] is the first actual argument
if args.len() < 2 {
eprintln!("Usage: {} <server-url>", args[0]);
return Err("Missing server URL argument".into());
}
let server_url = &args[1];
println!("Server URL: {:?}", server_url);
// Registration
// Registration with backend server
let (server_id, ip) = match api::register_with_server(&server_url).await {
Ok((id, ip)) => (id, ip),
Ok((id, ip)) => {
println!("Registered with server. ID: {}, IP: {}", id, ip);
(id, ip)
}
Err(e) => {
eprintln!("Fehler bei der Registrierung am Server: {e}");
return Err(e);
}
};
// Initialize Docker (optional - agent can run without Docker)
let docker_manager = docker::DockerManager::new_optional();
// Get current image version
let client_version = if let Some(ref docker_manager) = docker_manager {
docker_manager.get_client_version().await
} else {
"unknown".to_string()
};
println!("Client Version: {}", client_version);
// Prepare Docker registration DTO
let container_dto = if let Some(ref docker_manager) = docker_manager {
docker_manager.create_registration_dto().await?
} else {
models::DockerRegistrationDto {
server_id: 0,
//container_count: 0, --- IGNORE ---
containers: "[]".to_string(),
}
};
let _ =
api::broadcast_docker_containers(server_url, server_id, &mut container_dto.clone()).await?;
// Start background tasks
// Start server listening for commands (only if Docker is available)
let listening_handle = if let Some(ref docker_manager) = docker_manager {
tokio::spawn({
let docker = docker_manager.docker.clone();
let server_url = server_url.to_string();
async move { api::listening_to_server(&docker, &server_url).await }
})
} else {
println!("Docker not available, skipping server command listener.");
tokio::spawn(async { Ok(()) }) // Dummy task
};
// Start heartbeat in background
let heartbeat_handle = tokio::spawn({
let ip = ip.clone();
@@ -58,20 +150,23 @@ async fn main() -> Result<(), Box<dyn Error + Send + Sync>> {
let metrics_handle = tokio::spawn({
let ip = ip.clone();
let server_url = server_url.to_string();
let docker_manager = docker_manager.as_ref().cloned().expect("metrics collection currently requires Docker; no DockerManager available");
async move {
let mut collector = metrics::Collector::new(server_id, ip);
let mut collector = metrics::Collector::new(server_id, ip, docker_manager);
collector.run(&server_url).await
}
});
// Wait for both tasks and explicitly check for errors
let (heartbeat_handle, metrics_handle) =
tokio::try_join!(flatten(heartbeat_handle), flatten(metrics_handle))?;
// Wait for all tasks and check for errors
let (listening_result, heartbeat_result, metrics_result) = tokio::try_join!(
flatten(listening_handle),
flatten(heartbeat_handle),
flatten(metrics_handle)
)?;
let (heartbeat, metrics) = (heartbeat_handle, metrics_handle);
println!(
"All tasks completed successfully: {:?}, {:?}.",
heartbeat, metrics
"All tasks completed: listening={:?}, heartbeat={:?}, metrics={:?}",
listening_result, heartbeat_result, metrics_result
);
println!("All tasks completed successfully.");

View File

@@ -1,26 +1,66 @@
/// # Metrics Module
///
/// This module orchestrates the collection and reporting of hardware and network metrics for WatcherAgent.
///
/// ## Responsibilities
/// - **Metric Collection:** Gathers real-time statistics from all hardware subsystems (CPU, GPU, RAM, disk, network).
/// - **Reporting:** Periodically sends metrics to the backend server using the API module.
/// - **Error Handling:** Robust to hardware failures and network errors, with retry logic and logging.
///
/// ## Usage
/// The [`Collector`] struct is instantiated in the main loop and runs as a background task, continuously collecting and reporting metrics.
use std::error::Error;
use std::time::Duration;
use crate::api;
use crate::docker::DockerManager;
//use crate::docker::DockerInfo;
use crate::hardware::network::NetworkMonitor;
use crate::hardware::HardwareInfo;
use crate::models::MetricDto;
use crate::models::{DockerMetricDto, MetricDto};
/// Main orchestrator for hardware and network metric collection and reporting.
///
/// The `Collector` struct manages the state required to collect metrics and send them to the backend server. It maintains a network monitor for bandwidth tracking, the agent's server ID, and its IP address.
///
/// # Fields
/// - `network_monitor`: Tracks network usage rates (rx/tx).
/// - `server_id`: Unique server ID assigned by the backend.
/// - `ip_address`: IP address of the agent.
pub struct Collector {
docker_manager: DockerManager,
network_monitor: NetworkMonitor,
server_id: i32,
server_id: u16,
ip_address: String,
}
impl Collector {
pub fn new(server_id: i32, ip_address: String) -> Self {
/// Creates a new `Collector` instance for metric collection and reporting.
///
/// # Arguments
/// * `server_id` - The server ID assigned by the backend.
/// * `ip_address` - The IP address of the agent.
///
/// # Returns
/// A new `Collector` ready to collect and report metrics.
pub fn new(server_id: u16, ip_address: String, docker_manager: DockerManager) -> Self {
Self {
docker_manager,
network_monitor: NetworkMonitor::new(),
server_id,
ip_address,
}
}
/// Runs the main metrics collection loop, periodically sending metrics to the backend server.
///
/// This function continuously collects hardware and network metrics, sends them to the backend, and handles errors gracefully. It uses a configurable interval and retries on failures.
///
/// # Arguments
/// * `base_url` - The base URL of the backend server.
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if metrics are sent successfully.
pub async fn run(&mut self, base_url: &str) -> Result<(), Box<dyn Error + Send + Sync>> {
loop {
println!(
@@ -35,11 +75,26 @@ impl Collector {
continue;
}
};
let docker_metrics = match self.docker_collect().await {
Ok(metrics) => metrics,
Err(e) => {
eprintln!("Error collecting docker metrics: {}", e);
tokio::time::sleep(Duration::from_secs(10)).await;
continue;
}
};
api::send_metrics(base_url, &metrics).await?;
api::send_docker_metrics(base_url, &docker_metrics).await?;
tokio::time::sleep(Duration::from_secs(20)).await;
}
}
/// Collects hardware and network metrics from all subsystems.
///
/// This function queries the hardware module for CPU, GPU, RAM, disk, and network statistics, and packages them into a [`MetricDto`] for reporting.
///
/// # Returns
/// * `Result<MetricDto, Box<dyn Error + Send + Sync>>` - The collected metrics or an error if hardware info is unavailable.
pub async fn collect(&mut self) -> Result<MetricDto, Box<dyn Error + Send + Sync>> {
let hardware = match HardwareInfo::collect().await {
Ok(hw) => hw,
@@ -59,14 +114,24 @@ impl Collector {
gpu_load: hardware.gpu.current_load.unwrap_or_default(),
gpu_temp: hardware.gpu.current_temp.unwrap_or_default(),
gpu_vram_size: hardware.gpu.vram_total.unwrap_or_default(),
gpu_vram_usage: hardware.gpu.vram_used.unwrap_or_default(),
ram_load: hardware.memory.used.unwrap_or_default(),
ram_size: hardware.memory.total.unwrap_or_default(),
disk_size: hardware.disk.total.unwrap_or_default(),
disk_usage: hardware.disk.used.unwrap_or_default(),
gpu_vram_load: match (hardware.gpu.vram_used, hardware.gpu.vram_total) {
(Some(used), Some(total)) if total > 0.0 => (used / total) * 100.0,
_ => 0.0,
},
ram_load: hardware.memory.current_load.unwrap_or_default(),
ram_size: hardware.memory.total_size.unwrap_or_default(),
disk_size: hardware.disk.total_size.unwrap_or_default(),
disk_usage: hardware.disk.total_usage.unwrap_or_default(),
disk_temp: 0.0, // not supported
net_rx: hardware.network.rx_rate.unwrap_or_default(),
net_tx: hardware.network.tx_rate.unwrap_or_default(),
})
}
/// Collects Docker container metrics via the `DockerManager` and repackages them with this agent's server ID.
pub async fn docker_collect(&self) -> Result<DockerMetricDto, Box<dyn Error + Send + Sync>> {
let metrics = self.docker_manager.collect_metrics().await?;
Ok(DockerMetricDto {
server_id: self.server_id,
containers: metrics.containers,
})
}
}
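// Illustrative sketch (not part of the original file): spawning the collector as a background
// task, mirroring the wiring in main.rs. All parameters are assumed to come from registration
// and DockerManager::new_optional().
fn spawn_collector_example(
    server_id: u16,
    ip_address: String,
    docker_manager: DockerManager,
    base_url: String,
) -> tokio::task::JoinHandle<Result<(), Box<dyn Error + Send + Sync>>> {
    tokio::spawn(async move {
        let mut collector = Collector::new(server_id, ip_address, docker_manager);
        // run() loops forever, sending metrics every cycle, so it is driven as a background task.
        collector.run(&base_url).await
    })
}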

View File

@@ -1,10 +1,31 @@
/// # Models Module
///
/// This module defines all data structures (DTOs) used for communication between WatcherAgent and the backend server, as well as hardware metrics and Docker container info.
///
/// ## Responsibilities
/// - **DTOs:** Define payloads for registration, metrics, heartbeat, and server commands.
/// - **Units:** All struct fields are documented with their units for clarity and API compatibility.
/// - **Docker Info:** Structures for representing Docker container state and statistics.
///
/// ## Usage
/// These types are serialized/deserialized for HTTP communication and used throughout the agent for data exchange.
use crate::docker::stats;
use serde::{Deserialize, Serialize};
// Data structures matching the C# DTOs
/// Registration data sent to the backend server.
///
/// ## Units
/// - `id`: Unique server identifier (integer)
/// - `ip_address`: IPv4 or IPv6 address (string)
/// - `cpu_type`: CPU model name (string)
/// - `cpu_cores`: Number of physical CPU cores (integer)
/// - `gpu_type`: GPU model name (string)
/// - `ram_size`: Total RAM size in **megabytes (MB)**
#[derive(Serialize, Debug)]
pub struct RegistrationDto {
#[serde(rename = "id")]
pub id: i32,
pub server_id: u16,
#[serde(rename = "ipAddress")]
pub ip_address: String,
#[serde(rename = "cpuType")]
@@ -17,10 +38,28 @@ pub struct RegistrationDto {
pub ram_size: f64,
}
/// Hardware and network metrics data sent to the backend server.
///
/// ## Units
/// - `server_id`: Unique server identifier (integer)
/// - `ip_address`: IPv4 or IPv6 address (string)
/// - `cpu_load`: CPU usage as a percentage (**0.0–100.0**)
/// - `cpu_temp`: CPU temperature in **degrees Celsius (°C)**
/// - `gpu_load`: GPU usage as a percentage (**0.0–100.0**)
/// - `gpu_temp`: GPU temperature in **degrees Celsius (°C)**
/// - `gpu_vram_size`: Total GPU VRAM in **bytes**
/// - `gpu_vram_load`: GPU VRAM usage as a percentage (**0.0–100.0**)
/// - `ram_load`: RAM usage as a percentage (**0.0–100.0**)
/// - `ram_size`: Total RAM in **bytes**
/// - `disk_size`: Total disk size in **bytes**
/// - `disk_usage`: Used disk space in **bytes**
/// - `disk_temp`: Disk temperature in **degrees Celsius (°C)** (if available)
/// - `net_rx`: Network receive rate in **bytes per second (B/s)**
/// - `net_tx`: Network transmit rate in **bytes per second (B/s)**
#[derive(Serialize, Debug)]
pub struct MetricDto {
#[serde(rename = "serverId")]
pub server_id: i32,
pub server_id: u16,
#[serde(rename = "ipAddress")]
pub ip_address: String,
#[serde(rename = "cpu_Load")]
@@ -33,8 +72,8 @@ pub struct MetricDto {
pub gpu_temp: f64,
#[serde(rename = "gpu_Vram_Size")]
pub gpu_vram_size: f64,
#[serde(rename = "gpu_Vram_Usage")]
pub gpu_vram_usage: f64,
#[serde(rename = "gpu_Vram_Load")]
pub gpu_vram_load: f64,
#[serde(rename = "ram_Load")]
pub ram_load: f64,
#[serde(rename = "ram_Size")]
@@ -51,19 +90,55 @@ pub struct MetricDto {
pub net_tx: f64,
}
/// Detailed disk information for each detected disk.
///
/// ## Units
/// - `disk_total_space`: Total disk space in **bytes**
/// - `disk_available_space`: Available disk space in **bytes**
/// - `disk_used_space`: Used disk space in **bytes**
/// - `component_disk_temperature`: Disk temperature in **degrees Celsius (°C)**
#[derive(Serialize, Debug)]
pub struct DiskInfoDetailed {
pub disk_name: String,
pub disk_kind: String,
pub disk_total_space: f64,
pub disk_available_space: f64,
pub disk_used_space: f64,
pub disk_mount_point: String,
pub component_disk_label: String,
pub component_disk_temperature: f32,
}
/// Response containing server ID and IP address.
///
/// ## Units
/// - `id`: Unique server identifier (integer)
/// - `ip_address`: IPv4 or IPv6 address (string)
#[derive(Deserialize)]
pub struct IdResponse {
pub id: i32,
pub id: u16,
#[serde(rename = "ipAddress")]
pub ip_address: String,
}
/// Heartbeat message data sent to the backend server.
///
/// ## Units
/// - `ip_address`: IPv4 or IPv6 address (string)
#[derive(Serialize)]
pub struct HeartbeatDto {
#[serde(rename = "IpAddress")]
pub ip_address: String,
}
/// Hardware summary data for diagnostics and registration.
///
/// ## Units
/// - `cpu_type`: CPU model name (string)
/// - `cpu_cores`: Number of physical CPU cores (integer)
/// - `gpu_type`: GPU model name (string)
/// - `ram_size`: Total RAM size in **megabytes (MB)**
/// - `ip_address`: IPv4 or IPv6 address (string)
#[derive(Serialize, Debug)]
pub struct HardwareDto {
pub cpu_type: String,
@@ -72,3 +147,119 @@ pub struct HardwareDto {
pub ram_size: f64,
pub ip_address: String,
}
/// Command message received from the backend server.
///
/// ## Fields
/// - `message_type`: Type of command (e.g., "update_image", "restart_container", "stop_agent")
/// - `data`: Command payload (arbitrary JSON)
/// - `message_id`: Unique identifier for acknowledgment
#[derive(Debug, Deserialize, Clone)]
pub struct ServerMessage {
// Define your message structure here
pub message_type: String,
pub data: serde_json::Value,
pub message_id: String, // Add an ID for acknowledgment
}
/// Acknowledgment payload sent to the backend server for command messages.
///
/// ## Fields
/// - `message_id`: Unique identifier of the acknowledged message
/// - `status`: Status string ("success", "error", etc.)
/// - `details`: Additional details or error messages
#[derive(Debug, Serialize, Clone)]
pub struct Acknowledgment {
pub message_id: String,
pub status: String, // "success" or "error"
pub details: String,
}
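// Illustrative sketch (not part of the original file): deserializing a backend command into a
// ServerMessage; the exact JSON payload shown here is an assumption based on the fields above.
fn parse_server_message_example() -> Result<ServerMessage, serde_json::Error> {
    serde_json::from_str(
        r#"{"message_type":"restart_container","data":{"image":"nginx:latest"},"message_id":"42"}"#,
    )
}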
/// Docker registration payload sent to the backend server.
///
/// ## Fields
/// - `server_id`: Unique server identifier (integer)
/// - `containers`: JSON-stringified array of [`DockerContainer`] entries (id, image, name)
#[derive(Debug, Serialize, Clone)]
pub struct DockerRegistrationDto {
/// Unique server identifier (integer)
pub server_id: u16,
/// Number of currently running containers
// pub container_count: usize, --- IGNORE ---
/// json stringified array of DockerContainer
///
/// ## Json Example
/// json format: [{"id":"234dsf234","image":"nginx:latest","name":"webserver"},...]
///
/// ## Fields
/// id: unique container ID (first 12 hex digits)
/// image: docker image name
/// name: container name
pub containers: String, // Vec<DockerContainer>,
}
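// Illustrative sketch (not part of the original file): building the registration DTO by
// JSON-stringifying a list of DockerContainer entries, matching the format described above.
fn build_registration_dto_example(
    server_id: u16,
    containers: &[DockerContainer],
) -> Result<DockerRegistrationDto, serde_json::Error> {
    Ok(DockerRegistrationDto {
        server_id,
        containers: serde_json::to_string(containers)?,
    })
}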
#[derive(Debug, Serialize, Clone)]
pub struct DockerMetricDto {
pub server_id: u16,
/// json stringified array of DockerContainer
///
/// ## Json Example
/// json format: [{"id":"234dsf234","status":"running","image":"nginx:latest","name":"webserver","network":{"net_in":1024,"net_out":2048},"cpu":{"cpu_load":12.5},"ram":{"ram_load":10.0}},...]
///
/// ## Fields
/// id: unique container ID (first 12 hex digits)
/// status: "running";"stopped";others
/// image: docker image name
/// name: container name
/// network: network stats
/// cpu: cpu stats
/// ram: ram stats
pub containers: String, // Vec<DockerContainerInfo>,
}
#[derive(Debug, Serialize, Clone)]
pub struct DockerCollectMetricDto {
pub id: String,
pub cpu: DockerContainerCpuDto,
pub ram: DockerContainerRamDto,
pub network: DockerContainerNetworkDto,
}
#[derive(Debug, Serialize, Clone)]
pub struct DockerContainerCpuDto {
pub cpu_load: Option<f64>,
}
#[derive(Debug, Serialize, Clone)]
pub struct DockerContainerRamDto {
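/// Memory usage as a percentage (0.0–100.0); note the field keeps the serialized name `cpu_load`.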
pub cpu_load: Option<f64>,
}
#[derive(Debug, Serialize, Clone)]
pub struct DockerContainerNetworkDto {
pub net_in: Option<f64>,
pub net_out: Option<f64>,
}
#[derive(Debug, Serialize, Clone)]
pub struct DockerContainerInfo {
pub container: Option<DockerContainer>,
pub status: Option<String>, // "running";"stopped";others
pub network: Option<stats::ContainerNetworkInfo>,
pub cpu: Option<stats::ContainerCpuInfo>,
pub ram: Option<stats::ContainerMemoryInfo>,
}
#[derive(Debug, Serialize, Clone)]
pub struct DockerContainer {
pub id: String,
pub image: Option<String>,
pub name: Option<String>,
}

docker-compose.yaml Normal file
View File

@@ -0,0 +1,23 @@
watcher-agent:
image: git.triggermeelmo.com/donpat1to/watcher-agent:development
container_name: watcher-agent
restart: always
privileged: true # Grants full hardware access (use with caution)
env_file: .env
pid: "host"
volumes:
# Mount critical system paths for hardware monitoring
- /sys:/sys:ro # CPU/GPU temps, sensors
- /proc:/proc # Process/CPU stats
- /dev:/dev:ro # Disk/GPU device access
- /var/run/docker.sock:/var/run/docker.sock # Docker API access
- /:/root:ro # Host root filesystem (read-only) for the df-based disk fallback
# Application volumes
- ./config:/app/config:ro
- ./logs:/app/logs
network_mode: host # Uses host network (for correct IP/interface detection)
healthcheck:
test: ["CMD", "/usr/local/bin/WatcherAgent", "healthcheck"]
interval: 30s
timeout: 3s
retries: 3