Compare commits

..

54 Commits

Author SHA1 Message Date
2a4cc4b2d5 added status in docker metric collect
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m9s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m10s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 4m0s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m12s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 4s
2025-10-29 22:23:58 +01:00
c36b17fa05 fixed json in api call 2025-10-29 21:35:54 +01:00
375b4450f0 fixed json formatting 2025-10-29 21:07:29 +01:00
b134be4c88 updated .env 2025-10-29 14:57:42 +01:00
6afd5d0fcd json as string 2025-10-29 14:26:56 +01:00
e02914516d removed db files
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m17s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m32s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 4m30s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m39s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 6s
2025-10-29 14:25:09 +01:00
bf90d3ceb9 added docker compose file 2025-10-29 14:24:26 +01:00
a8ccb0521a updated models to parse json better
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m19s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m28s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 4m26s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m31s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-29 12:11:30 +01:00
c90a276dca added error handling
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 3s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m8s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m4s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m55s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m11s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-28 11:20:12 +01:00
dc4c23f9d9 removed mut input attribute in broadcast docker container
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m5s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m43s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m26s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 3s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 1m58s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Successful in 4s
2025-10-27 23:28:23 +01:00
3182d57539 added documentation for broadcasting docker container 2025-10-27 23:25:30 +01:00
8c1ef7f9f6 removed unused imports
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m1s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m38s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m28s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m2s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 4s
2025-10-27 23:07:48 +01:00
16020eea50 added error handling in metrics handle 2025-10-27 23:03:49 +01:00
432a798210 updated models 2025-10-27 21:58:35 +01:00
a095444222 fixed stuck in loop
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m8s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m26s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m47s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m16s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-09 11:14:13 +02:00
5e7bc3df54 added debugging
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m12s
Rust Cross-Platform Build / Set Tag Name (push) Has been cancelled
Rust Cross-Platform Build / Build and Push Docker Image (push) Has been cancelled
Rust Cross-Platform Build / Create Tag (push) Has been cancelled
Rust Cross-Platform Build / Workflow Summary (push) Has been cancelled
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Has been cancelled
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Has been cancelled
2025-10-09 11:10:53 +02:00
1c7a169956 added container broadcasting
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m14s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m18s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 4m4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m19s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-09 10:59:21 +02:00
c7bce926e9 added docker registration dto 2025-10-09 10:39:52 +02:00
711083daa0 fixed server message handle 2025-10-06 13:01:03 +02:00
06cec6ff9f added Option to data structs 2025-10-06 12:43:15 +02:00
a7cae5e93f added docker metrics
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m3s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m56s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m37s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m12s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-05 22:43:48 +02:00
66428863e6 moved stats into own folder
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Failing after 1m9s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Has been skipped
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Has been skipped
Rust Cross-Platform Build / Set Tag Name (push) Has been skipped
Rust Cross-Platform Build / Build and Push Docker Image (push) Has been skipped
Rust Cross-Platform Build / Create Tag (push) Has been skipped
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
2025-10-05 15:21:24 +02:00
b35cac0dbe changed Dtos and Docker structs 2025-10-04 22:05:28 +02:00
bb55b46c34 fixed image version detection
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m4s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m49s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m35s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m8s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-04 20:38:20 +02:00
76d54cb433 fixed image detection in client container
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m8s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m0s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m43s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m8s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-04 20:25:35 +02:00
4dc8c56a5c moved eprintln to println 2025-10-04 17:11:40 +02:00
c81040b16b moved github to gitea
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 3s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m2s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m45s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m36s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m2s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-04 13:22:14 +02:00
68c307e258 added debug print
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m6s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m52s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m40s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m13s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-04 11:57:06 +02:00
5d00f072d8 fixed syntax
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m38s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m20s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 2m36s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m3s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-04 01:21:03 +02:00
9072e253ec removed else handling from set-tag
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 6s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m44s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Has been cancelled
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Has been cancelled
2025-10-04 00:56:57 +02:00
063ad113d7 added staging to branches for cicd 2025-10-04 00:55:43 +02:00
097be0f555 added staging to branches for cicd 2025-10-04 00:55:36 +02:00
c7b8e24a54 excluding builds on subbranches
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m26s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Has been cancelled
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Has been cancelled
2025-10-04 00:50:00 +02:00
fb1d016b36 excluding builds on subbranches
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m36s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m21s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 2m38s
2025-10-04 00:42:03 +02:00
89d58e9696 moved set tag before docker build
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m2s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m44s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m26s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 3s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 3m15s
Rust Cross-Platform Build / Create Tag (push) Failing after 6s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
2025-10-04 00:31:30 +02:00
45c6e8180c added only tagging on main dev and staging 2025-10-04 00:23:52 +02:00
c395885bbd removed pull request
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m4s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m49s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m31s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
2025-10-04 00:01:13 +02:00
75494e2a71 changed api endpoints register and hardware-info
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m16s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m19s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m44s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 3m6s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 4s
2025-10-03 23:53:34 +02:00
8e6e291f6a changed api endpoints register and hardware-info 2025-10-03 23:53:26 +02:00
bfeb43f38a added docker management
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m2s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m18s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m25s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m0s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-03 20:42:35 +02:00
2f5e2391f7 removed useless if statement 2025-10-03 14:30:41 +02:00
820952089a removed useless if statement 2025-10-03 14:19:48 +02:00
c745125f20 check if checks are needed or needs is enough
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m4s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m47s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m30s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m20s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-03 13:47:00 +02:00
758fa7608f added toolcache
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m10s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m1s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m47s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m8s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-03 13:17:48 +02:00
ee6b947f29 update
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m23s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m54s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m35s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m4s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 4s
2025-10-03 12:22:12 +02:00
18dd1ef528 no messy docs 2025-10-02 00:38:23 +02:00
8fa7866cc2 creating directory for doc
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m3s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m42s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 4m22s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m8s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-01 22:57:06 +02:00
238ad87119 replaced documentation post with separate step
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m5s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m47s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 4m20s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m4s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-01 22:42:35 +02:00
d584e21fd9 replaced documentation post with separate step 2025-10-01 22:33:00 +02:00
ac79a2e0b7 added right path for cargo doc push 2025-10-01 22:21:59 +02:00
1798c1270b added right path for cargo doc push 2025-10-01 22:21:44 +02:00
97d1019b69 added right path for cargo doc push
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m5s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Failing after 2m50s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Failing after 3m31s
Rust Cross-Platform Build / Build and Push Docker Image (push) Has been skipped
Rust Cross-Platform Build / Create Tag (push) Has been skipped
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
2025-10-01 21:49:02 +02:00
e30ceb75d9 added cargo doc push
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m6s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Failing after 3m2s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Failing after 3m39s
Rust Cross-Platform Build / Build and Push Docker Image (push) Has been skipped
Rust Cross-Platform Build / Create Tag (push) Has been skipped
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
2025-10-01 19:49:48 +02:00
4681e0c694 fixed id type
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m7s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m56s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m40s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m11s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-01 19:00:34 +02:00
18 changed files with 1997 additions and 344 deletions

.env (new file, +1 line)

@@ -0,0 +1 @@
SERVER_URL=http://localhost:5000
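For context, a minimal sketch of how the agent could pick this value up at startup, assuming it is read from the process environment (the exact loading mechanism is not shown in this diff):

use std::env;

fn main() {
    // Fall back to the value committed in .env when SERVER_URL is not set.
    let base_url = env::var("SERVER_URL")
        .unwrap_or_else(|_| "http://localhost:5000".to_string());
    println!("Monitoring server base URL: {}", base_url);
}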

Rust Cross-Platform Build workflow file

@@ -3,10 +3,8 @@ name: Rust Cross-Platform Build
on:
workflow_dispatch:
push:
branches: [ "development", "main", "feature/*", "bugfix/*", "enhancement/*" ]
branches: [ "development", "main", "staging" ]
tags: [ "v*.*.*" ]
pull_request:
branches: [ "development", "main" ]
env:
REGISTRY: git.triggermeelmo.com
@@ -76,44 +74,6 @@ jobs:
# working-directory: ${{ needs.detect-project.outputs.project-dir }}
# run: cargo clippy -- -D warnings
set-tag:
name: Set Tag Name
runs-on: ubuntu-latest
outputs:
tag_name: ${{ steps.set_tag.outputs.tag_name }}
steps:
- uses: actions/checkout@v4
- name: Determine next semantic version tag
id: set_tag
run: |
git fetch --tags
# Find latest tag matching vX.Y.Z
latest_tag=$(git tag --list 'v*.*.*' --sort=-v:refname | head -n 1)
if [[ -z "$latest_tag" ]]; then
major=0
minor=0
patch=0
else
version="${latest_tag#v}"
IFS='.' read -r major minor patch <<< "$version"
fi
if [[ "${GITHUB_REF}" == "refs/heads/main" ]]; then
major=$((major + 1))
minor=0
patch=0
elif [[ "${GITHUB_REF}" == "refs/heads/development" ]]; then
minor=$((minor + 1))
patch=0
else
patch=$((patch + 1))
fi
new_tag="v${major}.${minor}.${patch}"
echo "tag_name=${new_tag}" >> $GITHUB_OUTPUT
# audit:
# name: Security Audit
# needs: [detect-project]
@@ -189,13 +149,65 @@ jobs:
path: |
${{ needs.detect-project.outputs.project-dir }}/target/${{ matrix.target }}/release/${{ needs.detect-project.outputs.project-name }}${{ matrix.os == 'windows' && '.exe' || '' }}
set-tag:
name: Set Tag Name
needs: [detect-project, build]
#if: ${{ !failure() && !cancelled() && github.event_name != 'pull_request' }}
runs-on: ubuntu-latest
outputs:
tag_name: ${{ steps.set_tag.outputs.tag_name }}
should_tag: ${{ steps.set_tag.outputs.should_tag }}
steps:
- uses: actions/checkout@v4
- name: Determine next semantic version tag
id: set_tag
run: |
git fetch --tags
# Find latest tag matching vX.Y.Z
latest_tag=$(git tag --list 'v*.*.*' --sort=-v:refname | head -n 1)
if [[ -z "$latest_tag" ]]; then
major=0
minor=0
patch=0
else
version="${latest_tag#v}"
IFS='.' read -r major minor patch <<< "$version"
fi
if [[ "${GITHUB_REF}" == "refs/heads/main" ]]; then
major=$((major + 1))
minor=0
patch=0
new_tag="v${major}.${minor}.${patch}"
echo "tag_name=${new_tag}" >> $GITHUB_OUTPUT
echo "should_tag=true" >> $GITHUB_OUTPUT
echo "Creating new major version tag: ${new_tag}"
elif [[ "${GITHUB_REF}" == "refs/heads/development" ]]; then
minor=$((minor + 1))
patch=0
new_tag="v${major}.${minor}.${patch}"
echo "tag_name=${new_tag}" >> $GITHUB_OUTPUT
echo "should_tag=true" >> $GITHUB_OUTPUT
echo "Creating new minor version tag: ${new_tag}"
elif [[ "${GITHUB_REF}" == "refs/heads/staging" ]]; then
patch=$((patch + 1))
new_tag="v${major}.${minor}.${patch}"
echo "tag_name=${new_tag}" >> $GITHUB_OUTPUT
echo "should_tag=true" >> $GITHUB_OUTPUT
echo "Creating new patch version tag: ${new_tag}"
fi
docker-build:
name: Build and Push Docker Image
needs: [detect-project, build, set-tag]
if: |
always() &&
needs.detect-project.result == 'success' &&
needs.build.result == 'success' &&
needs.set-tag.outputs.should_tag == 'true' &&
github.event_name != 'pull_request'
runs-on: ubuntu-latest
environment: production
@@ -233,9 +245,6 @@ jobs:
tag:
name: Create Tag
needs: [docker-build, build, set-tag]
if: |
github.event_name == 'push' &&
needs.docker-build.result == 'success'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
@@ -251,13 +260,13 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
echo "Creating new tag: ${{ needs.set-tag.outputs.tag_name }}"
git tag ${{ needs.set-tag.outputs.tag_name }}
git push origin ${{ needs.set-tag.outputs.tag_name }}
summary:
name: Workflow Summary
needs: [test, audit, build, docker-build]
needs: [test, build, docker-build]
if: always()
runs-on: ubuntu-latest
steps:
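To make the branch-based bump rules of the reworked set-tag job easier to follow, here is an illustrative standalone Rust sketch of the same policy (main bumps the major version, development the minor, staging the patch, and every other ref produces no tag); it is not code from this repository:

fn next_tag(latest_tag: Option<&str>, git_ref: &str) -> Option<String> {
    // Parse "vMAJOR.MINOR.PATCH", defaulting to 0.0.0 when no tag exists yet.
    let (mut major, mut minor, mut patch) = (0u64, 0u64, 0u64);
    if let Some(tag) = latest_tag {
        let nums: Vec<u64> = tag
            .trim_start_matches('v')
            .split('.')
            .filter_map(|part| part.parse().ok())
            .collect();
        if nums.len() == 3 {
            (major, minor, patch) = (nums[0], nums[1], nums[2]);
        }
    }
    // The branch decides which component is bumped; other refs are not tagged.
    match git_ref {
        "refs/heads/main" => Some(format!("v{}.0.0", major + 1)),
        "refs/heads/development" => Some(format!("v{}.{}.0", major, minor + 1)),
        "refs/heads/staging" => Some(format!("v{}.{}.{}", major, minor, patch + 1)),
        _ => None,
    }
}

fn main() {
    assert_eq!(next_tag(Some("v1.4.2"), "refs/heads/development").as_deref(), Some("v1.5.0"));
    assert_eq!(next_tag(None, "refs/heads/main").as_deref(), Some("v1.0.0"));
    assert_eq!(next_tag(Some("v1.4.2"), "refs/heads/feature/x"), None);
}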

.gitignore (vendored, +3 lines)

@@ -17,6 +17,9 @@ Cargo.lock
# MSVC Windows builds of rustc generate these, which store debugging information
*.pdb
.env
watcher-volumes
# RustRover
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore

WatcherAgent/- (new file, +578 lines)

@@ -0,0 +1,578 @@
#!/bin/sh
#
# This script should be run via curl:
# sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
# or via wget:
# sh -c "$(wget -qO- https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
# or via fetch:
# sh -c "$(fetch -o - https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
#
# As an alternative, you can first download the install script and run it afterwards:
# wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh
# sh install.sh
#
# You can tweak the install behavior by setting variables when running the script. For
# example, to change the path to the Oh My Zsh repository:
# ZSH=~/.zsh sh install.sh
#
# Respects the following environment variables:
# ZDOTDIR - path to Zsh dotfiles directory (default: unset). See [1][2]
# [1] https://zsh.sourceforge.io/Doc/Release/Parameters.html#index-ZDOTDIR
# [2] https://zsh.sourceforge.io/Doc/Release/Files.html#index-ZDOTDIR_002c-use-of
# ZSH - path to the Oh My Zsh repository folder (default: $HOME/.oh-my-zsh)
# REPO - name of the GitHub repo to install from (default: ohmyzsh/ohmyzsh)
# REMOTE - full remote URL of the git repo to install (default: GitHub via HTTPS)
# BRANCH - branch to check out immediately after install (default: master)
#
# Other options:
# CHSH - 'no' means the installer will not change the default shell (default: yes)
# RUNZSH - 'no' means the installer will not run zsh after the install (default: yes)
# KEEP_ZSHRC - 'yes' means the installer will not replace an existing .zshrc (default: no)
# OVERWRITE_CONFIRMATION - 'no' means the installer will not ask for confirmation to overwrite the existing .zshrc (default: yes)
#
# You can also pass some arguments to the install script to set some these options:
# --skip-chsh: has the same behavior as setting CHSH to 'no'
# --unattended: sets both CHSH and RUNZSH to 'no'
# --keep-zshrc: sets KEEP_ZSHRC to 'yes'
# For example:
# sh install.sh --unattended
# or:
# sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)" "" --unattended
#
set -e
# Make sure important variables exist if not already defined
#
# $USER is defined by login(1) which is not always executed (e.g. containers)
# POSIX: https://pubs.opengroup.org/onlinepubs/009695299/utilities/id.html
USER=${USER:-$(id -u -n)}
# $HOME is defined at the time of login, but it could be unset. If it is unset,
# a tilde by itself (~) will not be expanded to the current user's home directory.
# POSIX: https://pubs.opengroup.org/onlinepubs/009696899/basedefs/xbd_chap08.html#tag_08_03
HOME="${HOME:-$(getent passwd $USER 2>/dev/null | cut -d: -f6)}"
# macOS does not have getent, but this works even if $HOME is unset
HOME="${HOME:-$(eval echo ~$USER)}"
# Track if $ZSH was provided
custom_zsh=${ZSH:+yes}
# Use $zdot to keep track of where the directory is for zsh dotfiles
# To check if $ZDOTDIR was provided, explicitly check for $ZDOTDIR
zdot="${ZDOTDIR:-$HOME}"
# Default value for $ZSH
# a) if $ZDOTDIR is supplied and not $HOME: $ZDOTDIR/ohmyzsh
# b) otherwise, $HOME/.oh-my-zsh
if [ -n "$ZDOTDIR" ] && [ "$ZDOTDIR" != "$HOME" ]; then
ZSH="${ZSH:-$ZDOTDIR/ohmyzsh}"
fi
ZSH="${ZSH:-$HOME/.oh-my-zsh}"
# Default settings
REPO=${REPO:-ohmyzsh/ohmyzsh}
REMOTE=${REMOTE:-https://github.com/${REPO}.git}
BRANCH=${BRANCH:-master}
# Other options
CHSH=${CHSH:-yes}
RUNZSH=${RUNZSH:-yes}
KEEP_ZSHRC=${KEEP_ZSHRC:-no}
OVERWRITE_CONFIRMATION=${OVERWRITE_CONFIRMATION:-yes}
command_exists() {
command -v "$@" >/dev/null 2>&1
}
user_can_sudo() {
# Check if sudo is installed
command_exists sudo || return 1
# Termux can't run sudo, so we can detect it and exit the function early.
case "$PREFIX" in
*com.termux*) return 1 ;;
esac
# The following command has 3 parts:
#
# 1. Run `sudo` with `-v`. Does the following:
# • with privilege: asks for a password immediately.
# • without privilege: exits with error code 1 and prints the message:
# Sorry, user <username> may not run sudo on <hostname>
#
# 2. Pass `-n` to `sudo` to tell it to not ask for a password. If the
# password is not required, the command will finish with exit code 0.
# If one is required, sudo will exit with error code 1 and print the
# message:
# sudo: a password is required
#
# 3. Check for the words "may not run sudo" in the output to really tell
# whether the user has privileges or not. For that we have to make sure
# to run `sudo` in the default locale (with `LANG=`) so that the message
# stays consistent regardless of the user's locale.
#
! LANG= sudo -n -v 2>&1 | grep -q "may not run sudo"
}
# The [ -t 1 ] check only works when the function is not called from
# a subshell (like in `$(...)` or `(...)`, so this hack redefines the
# function at the top level to always return false when stdout is not
# a tty.
if [ -t 1 ]; then
is_tty() {
true
}
else
is_tty() {
false
}
fi
# This function uses the logic from supports-hyperlinks[1][2], which is
# made by Kat Marchán (@zkat) and licensed under the Apache License 2.0.
# [1] https://github.com/zkat/supports-hyperlinks
# [2] https://crates.io/crates/supports-hyperlinks
#
# Copyright (c) 2021 Kat Marchán
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
supports_hyperlinks() {
# $FORCE_HYPERLINK must be set and be non-zero (this acts as a logic bypass)
if [ -n "$FORCE_HYPERLINK" ]; then
[ "$FORCE_HYPERLINK" != 0 ]
return $?
fi
# If stdout is not a tty, it doesn't support hyperlinks
is_tty || return 1
# DomTerm terminal emulator (domterm.org)
if [ -n "$DOMTERM" ]; then
return 0
fi
# VTE-based terminals above v0.50 (Gnome Terminal, Guake, ROXTerm, etc)
if [ -n "$VTE_VERSION" ]; then
[ $VTE_VERSION -ge 5000 ]
return $?
fi
# If $TERM_PROGRAM is set, these terminals support hyperlinks
case "$TERM_PROGRAM" in
Hyper|iTerm.app|terminology|WezTerm|vscode) return 0 ;;
esac
# These termcap entries support hyperlinks
case "$TERM" in
xterm-kitty|alacritty|alacritty-direct) return 0 ;;
esac
# xfce4-terminal supports hyperlinks
if [ "$COLORTERM" = "xfce4-terminal" ]; then
return 0
fi
# Windows Terminal also supports hyperlinks
if [ -n "$WT_SESSION" ]; then
return 0
fi
# Konsole supports hyperlinks, but it's an opt-in setting that can't be detected
# https://github.com/ohmyzsh/ohmyzsh/issues/10964
# if [ -n "$KONSOLE_VERSION" ]; then
# return 0
# fi
return 1
}
# Adapted from code and information by Anton Kochkov (@XVilka)
# Source: https://gist.github.com/XVilka/8346728
supports_truecolor() {
case "$COLORTERM" in
truecolor|24bit) return 0 ;;
esac
case "$TERM" in
iterm |\
tmux-truecolor |\
linux-truecolor |\
xterm-truecolor |\
screen-truecolor) return 0 ;;
esac
return 1
}
fmt_link() {
# $1: text, $2: url, $3: fallback mode
if supports_hyperlinks; then
printf '\033]8;;%s\033\\%s\033]8;;\033\\\n' "$2" "$1"
return
fi
case "$3" in
--text) printf '%s\n' "$1" ;;
--url|*) fmt_underline "$2" ;;
esac
}
fmt_underline() {
is_tty && printf '\033[4m%s\033[24m\n' "$*" || printf '%s\n' "$*"
}
# shellcheck disable=SC2016 # backtick in single-quote
fmt_code() {
is_tty && printf '`\033[2m%s\033[22m`\n' "$*" || printf '`%s`\n' "$*"
}
fmt_error() {
printf '%sError: %s%s\n' "${FMT_BOLD}${FMT_RED}" "$*" "$FMT_RESET" >&2
}
setup_color() {
# Only use colors if connected to a terminal
if ! is_tty; then
FMT_RAINBOW=""
FMT_RED=""
FMT_GREEN=""
FMT_YELLOW=""
FMT_BLUE=""
FMT_BOLD=""
FMT_RESET=""
return
fi
if supports_truecolor; then
FMT_RAINBOW="
$(printf '\033[38;2;255;0;0m')
$(printf '\033[38;2;255;97;0m')
$(printf '\033[38;2;247;255;0m')
$(printf '\033[38;2;0;255;30m')
$(printf '\033[38;2;77;0;255m')
$(printf '\033[38;2;168;0;255m')
$(printf '\033[38;2;245;0;172m')
"
else
FMT_RAINBOW="
$(printf '\033[38;5;196m')
$(printf '\033[38;5;202m')
$(printf '\033[38;5;226m')
$(printf '\033[38;5;082m')
$(printf '\033[38;5;021m')
$(printf '\033[38;5;093m')
$(printf '\033[38;5;163m')
"
fi
FMT_RED=$(printf '\033[31m')
FMT_GREEN=$(printf '\033[32m')
FMT_YELLOW=$(printf '\033[33m')
FMT_BLUE=$(printf '\033[34m')
FMT_BOLD=$(printf '\033[1m')
FMT_RESET=$(printf '\033[0m')
}
setup_ohmyzsh() {
# Prevent the cloned repository from having insecure permissions. Failing to do
# so causes compinit() calls to fail with "command not found: compdef" errors
# for users with insecure umasks (e.g., "002", allowing group writability). Note
# that this will be ignored under Cygwin by default, as Windows ACLs take
# precedence over umasks except for filesystems mounted with option "noacl".
umask g-w,o-w
echo "${FMT_BLUE}Cloning Oh My Zsh...${FMT_RESET}"
command_exists git || {
fmt_error "git is not installed"
exit 1
}
ostype=$(uname)
if [ -z "${ostype%CYGWIN*}" ] && git --version | grep -Eq 'msysgit|windows'; then
fmt_error "Windows/MSYS Git is not supported on Cygwin"
fmt_error "Make sure the Cygwin git package is installed and is first on the \$PATH"
exit 1
fi
# Manual clone with git config options to support git < v1.7.2
git init --quiet "$ZSH" && cd "$ZSH" \
&& git config core.eol lf \
&& git config core.autocrlf false \
&& git config fsck.zeroPaddedFilemode ignore \
&& git config fetch.fsck.zeroPaddedFilemode ignore \
&& git config receive.fsck.zeroPaddedFilemode ignore \
&& git config oh-my-zsh.remote origin \
&& git config oh-my-zsh.branch "$BRANCH" \
&& git remote add origin "$REMOTE" \
&& git fetch --depth=1 origin \
&& git checkout -b "$BRANCH" "origin/$BRANCH" || {
[ ! -d "$ZSH" ] || {
cd -
rm -rf "$ZSH" 2>/dev/null
}
fmt_error "git clone of oh-my-zsh repo failed"
exit 1
}
# Exit installation directory
cd -
echo
}
setup_zshrc() {
# Keep most recent old .zshrc at .zshrc.pre-oh-my-zsh, and older ones
# with datestamp of installation that moved them aside, so we never actually
# destroy a user's original zshrc
echo "${FMT_BLUE}Looking for an existing zsh config...${FMT_RESET}"
# Must use this exact name so uninstall.sh can find it
OLD_ZSHRC="$zdot/.zshrc.pre-oh-my-zsh"
if [ -f "$zdot/.zshrc" ] || [ -h "$zdot/.zshrc" ]; then
# Skip this if the user doesn't want to replace an existing .zshrc
if [ "$KEEP_ZSHRC" = yes ]; then
echo "${FMT_YELLOW}Found ${zdot}/.zshrc.${FMT_RESET} ${FMT_GREEN}Keeping...${FMT_RESET}"
return
fi
if [ $OVERWRITE_CONFIRMATION != "no" ]; then
# Ask user for confirmation before backing up and overwriting
echo "${FMT_YELLOW}Found ${zdot}/.zshrc."
echo "The existing .zshrc will be backed up to .zshrc.pre-oh-my-zsh if overwritten."
echo "Make sure your .zshrc contains the following minimal configuration if you choose not to overwrite it:${FMT_RESET}"
echo "----------------------------------------"
cat "$ZSH/templates/minimal.zshrc"
echo "----------------------------------------"
printf '%sDo you want to overwrite it with the Oh My Zsh template? [Y/n]%s ' \
"$FMT_YELLOW" "$FMT_RESET"
read -r opt
case $opt in
[Yy]*|"") ;;
[Nn]*) echo "Overwrite skipped. Existing .zshrc will be kept."; return ;;
*) echo "Invalid choice. Overwrite skipped. Existing .zshrc will be kept."; return ;;
esac
fi
if [ -e "$OLD_ZSHRC" ]; then
OLD_OLD_ZSHRC="${OLD_ZSHRC}-$(date +%Y-%m-%d_%H-%M-%S)"
if [ -e "$OLD_OLD_ZSHRC" ]; then
fmt_error "$OLD_OLD_ZSHRC exists. Can't back up ${OLD_ZSHRC}"
fmt_error "re-run the installer again in a couple of seconds"
exit 1
fi
mv "$OLD_ZSHRC" "${OLD_OLD_ZSHRC}"
echo "${FMT_YELLOW}Found old .zshrc.pre-oh-my-zsh." \
"${FMT_GREEN}Backing up to ${OLD_OLD_ZSHRC}${FMT_RESET}"
fi
echo "${FMT_GREEN}Backing up to ${OLD_ZSHRC}${FMT_RESET}"
mv "$zdot/.zshrc" "$OLD_ZSHRC"
fi
echo "${FMT_GREEN}Using the Oh My Zsh template file and adding it to $zdot/.zshrc.${FMT_RESET}"
# Modify $ZSH variable in .zshrc directory to use the literal $ZDOTDIR or $HOME
omz="$ZSH"
if [ -n "$ZDOTDIR" ] && [ "$ZDOTDIR" != "$HOME" ]; then
omz=$(echo "$omz" | sed "s|^$ZDOTDIR/|\$ZDOTDIR/|")
fi
omz=$(echo "$omz" | sed "s|^$HOME/|\$HOME/|")
sed "s|^export ZSH=.*$|export ZSH=\"${omz}\"|" "$ZSH/templates/zshrc.zsh-template" > "$zdot/.zshrc-omztemp"
mv -f "$zdot/.zshrc-omztemp" "$zdot/.zshrc"
echo
}
setup_shell() {
# Skip setup if the user wants or stdin is closed (not running interactively).
if [ "$CHSH" = no ]; then
return
fi
# If this user's login shell is already "zsh", do not attempt to switch.
if [ "$(basename -- "$SHELL")" = "zsh" ]; then
return
fi
# If this platform doesn't provide a "chsh" command, bail out.
if ! command_exists chsh; then
cat <<EOF
I can't change your shell automatically because this system does not have chsh.
${FMT_BLUE}Please manually change your default shell to zsh${FMT_RESET}
EOF
return
fi
echo "${FMT_BLUE}Time to change your default shell to zsh:${FMT_RESET}"
# Prompt for user choice on changing the default login shell
printf '%sDo you want to change your default shell to zsh? [Y/n]%s ' \
"$FMT_YELLOW" "$FMT_RESET"
read -r opt
case $opt in
[Yy]*|"") ;;
[Nn]*) echo "Shell change skipped."; return ;;
*) echo "Invalid choice. Shell change skipped."; return ;;
esac
# Check if we're running on Termux
case "$PREFIX" in
*com.termux*) termux=true; zsh=zsh ;;
*) termux=false ;;
esac
if [ "$termux" != true ]; then
# Test for the right location of the "shells" file
if [ -f /etc/shells ]; then
shells_file=/etc/shells
elif [ -f /usr/share/defaults/etc/shells ]; then # Solus OS
shells_file=/usr/share/defaults/etc/shells
else
fmt_error "could not find /etc/shells file. Change your default shell manually."
return
fi
# Get the path to the right zsh binary
# 1. Use the most preceding one based on $PATH, then check that it's in the shells file
# 2. If that fails, get a zsh path from the shells file, then check it actually exists
if ! zsh=$(command -v zsh) || ! grep -qx "$zsh" "$shells_file"; then
if ! zsh=$(grep '^/.*/zsh$' "$shells_file" | tail -n 1) || [ ! -f "$zsh" ]; then
fmt_error "no zsh binary found or not present in '$shells_file'"
fmt_error "change your default shell manually."
return
fi
fi
fi
# We're going to change the default shell, so back up the current one
if [ -n "$SHELL" ]; then
echo "$SHELL" > "$zdot/.shell.pre-oh-my-zsh"
else
grep "^$USER:" /etc/passwd | awk -F: '{print $7}' > "$zdot/.shell.pre-oh-my-zsh"
fi
echo "Changing your shell to $zsh..."
# Check if user has sudo privileges to run `chsh` with or without `sudo`
#
# This allows the call to succeed without password on systems where the
# user does not have a password but does have sudo privileges, like in
# Google Cloud Shell.
#
# On systems that don't have a user with passwordless sudo, the user will
# be prompted for the password either way, so this shouldn't cause any issues.
#
if user_can_sudo; then
sudo -k chsh -s "$zsh" "$USER" # -k forces the password prompt
else
chsh -s "$zsh" "$USER" # run chsh normally
fi
# Check if the shell change was successful
if [ $? -ne 0 ]; then
fmt_error "chsh command unsuccessful. Change your default shell manually."
else
export SHELL="$zsh"
echo "${FMT_GREEN}Shell successfully changed to '$zsh'.${FMT_RESET}"
fi
echo
}
# shellcheck disable=SC2183 # printf string has more %s than arguments ($FMT_RAINBOW expands to multiple arguments)
print_success() {
printf '%s %s__ %s %s %s %s %s__ %s\n' $FMT_RAINBOW $FMT_RESET
printf '%s ____ %s/ /_ %s ____ ___ %s__ __ %s ____ %s_____%s/ /_ %s\n' $FMT_RAINBOW $FMT_RESET
printf '%s / __ \\%s/ __ \\ %s / __ `__ \\%s/ / / / %s /_ / %s/ ___/%s __ \\ %s\n' $FMT_RAINBOW $FMT_RESET
printf '%s/ /_/ /%s / / / %s / / / / / /%s /_/ / %s / /_%s(__ )%s / / / %s\n' $FMT_RAINBOW $FMT_RESET
printf '%s\\____/%s_/ /_/ %s /_/ /_/ /_/%s\\__, / %s /___/%s____/%s_/ /_/ %s\n' $FMT_RAINBOW $FMT_RESET
printf '%s %s %s %s /____/ %s %s %s %s....is now installed!%s\n' $FMT_RAINBOW $FMT_GREEN $FMT_RESET
printf '\n'
printf '\n'
printf "%s %s %s\n" "Before you scream ${FMT_BOLD}${FMT_YELLOW}Oh My Zsh!${FMT_RESET} look over the" \
"$(fmt_code "$(fmt_link ".zshrc" "file://$zdot/.zshrc" --text)")" \
"file to select plugins, themes, and options."
printf '\n'
printf '%s\n' "• Follow us on X: $(fmt_link @ohmyzsh https://x.com/ohmyzsh)"
printf '%s\n' "• Join our Discord community: $(fmt_link "Discord server" https://discord.gg/ohmyzsh)"
printf '%s\n' "• Get stickers, t-shirts, coffee mugs and more: $(fmt_link "Planet Argon Shop" https://shop.planetargon.com/collections/oh-my-zsh)"
printf '%s\n' $FMT_RESET
}
main() {
# Run as unattended if stdin is not a tty
if [ ! -t 0 ]; then
RUNZSH=no
CHSH=no
OVERWRITE_CONFIRMATION=no
fi
# Parse arguments
while [ $# -gt 0 ]; do
case $1 in
--unattended) RUNZSH=no; CHSH=no; OVERWRITE_CONFIRMATION=no ;;
--skip-chsh) CHSH=no ;;
--keep-zshrc) KEEP_ZSHRC=yes ;;
esac
shift
done
setup_color
if ! command_exists zsh; then
echo "${FMT_YELLOW}Zsh is not installed.${FMT_RESET} Please install zsh first."
exit 1
fi
if [ -d "$ZSH" ]; then
echo "${FMT_YELLOW}The \$ZSH folder already exists ($ZSH).${FMT_RESET}"
if [ "$custom_zsh" = yes ]; then
cat <<EOF
You ran the installer with the \$ZSH setting or the \$ZSH variable is
exported. You have 3 options:
1. Unset the ZSH variable when calling the installer:
$(fmt_code "ZSH= sh install.sh")
2. Install Oh My Zsh to a directory that doesn't exist yet:
$(fmt_code "ZSH=path/to/new/ohmyzsh/folder sh install.sh")
3. (Caution) If the folder doesn't contain important information,
you can just remove it with $(fmt_code "rm -r $ZSH")
EOF
else
echo "You'll need to remove it if you want to reinstall."
fi
exit 1
fi
# Create ZDOTDIR folder structure if it doesn't exist
if [ -n "$ZDOTDIR" ]; then
mkdir -p "$ZDOTDIR"
fi
setup_ohmyzsh
setup_zshrc
setup_shell
print_success
if [ $RUNZSH = no ]; then
echo "${FMT_YELLOW}Run zsh to try it out.${FMT_RESET}"
exit
fi
exec zsh -l
}
main "$@"

WatcherAgent API module

@@ -1,5 +1,3 @@
/// # API Module
///
/// This module provides all HTTP communication between WatcherAgent and the backend server.
@@ -14,9 +12,12 @@
/// These functions are called from the main agent loop and background tasks. All network operations are asynchronous and robust to transient failures.
use std::time::Duration;
use crate::hardware::HardwareInfo;
use crate::models::{HeartbeatDto, IdResponse, MetricDto, RegistrationDto, ServerMessage, Acknowledgment};
use crate::docker::serverclientcomm::handle_server_message;
use crate::hardware::HardwareInfo;
use crate::models::{
Acknowledgment, DockerMetricDto, DockerRegistrationDto, HeartbeatDto,
IdResponse, MetricDto, RegistrationDto, ServerMessage,
};
use anyhow::Result;
use reqwest::{Client, StatusCode};
@@ -39,7 +40,7 @@ use bollard::Docker;
/// Returns an error if unable to register after repeated attempts.
pub async fn register_with_server(
base_url: &str,
) -> Result<(i32, String), Box<dyn Error + Send + Sync>> {
) -> Result<(u16, String), Box<dyn Error + Send + Sync>> {
// First get local IP
let ip = local_ip_address::local_ip()?.to_string();
println!("Local IP address detected: {}", ip);
@@ -57,7 +58,7 @@ pub async fn register_with_server(
// Prepare registration data
let registration = RegistrationDto {
id: server_id,
server_id: server_id,
ip_address: registered_ip.clone(),
cpu_type: hardware.cpu.name.clone().unwrap_or_default(),
cpu_cores: (hardware.cpu.cores).unwrap_or_default(),
@@ -68,7 +69,7 @@ pub async fn register_with_server(
// Try to register (will retry on failure)
loop {
println!("Attempting to register with server...");
let url = format!("{}/monitoring/register-agent-by-id", base_url);
let url = format!("{}/monitoring/hardware-info", base_url);
match client.post(&url).json(&registration).send().await {
Ok(resp) if resp.status().is_success() => {
println!("✅ Successfully registered with server.");
@@ -103,12 +104,12 @@ pub async fn register_with_server(
async fn get_server_id_by_ip(
base_url: &str,
ip: &str,
) -> Result<(i32, String), Box<dyn Error + Send + Sync>> {
) -> Result<(u16, String), Box<dyn Error + Send + Sync>> {
let client = Client::builder()
.danger_accept_invalid_certs(true)
.build()?;
let url = format!("{}/monitoring/server-id-by-ip?ipAddress={}", base_url, ip);
let url = format!("{}/monitoring/register?ipAddress={}", base_url, ip);
loop {
println!("Attempting to fetch server ID for IP {}...", ip);
@@ -151,6 +152,89 @@ async fn get_server_id_by_ip(
}
}
/// Broadcasts Docker container information to the monitoring server for service discovery.
///
/// This function sends the current Docker container configuration to the server
/// to register available containers and enable service monitoring. It will
/// continuously retry until successful, making it suitable for initial
/// registration scenarios.
///
/// # Arguments
///
/// * `base_url` - The base URL of the monitoring server API (e.g., "https://monitoring.example.com")
/// * `server_id` - The ID of the server to associate the containers with
/// * `container_dto` - Mutable reference to Docker container information for broadcast
///
/// # Returns
///
/// * `Ok(())` - When container information is successfully broadcasted to the server
/// * `Err(Box<dyn Error + Send + Sync>)` - If an unrecoverable error occurs (though the function typically retries on transient failures)
///
/// # Behavior
///
/// This function operates in a retry loop with the following characteristics:
///
/// - **Retry Logic**: Attempts broadcast every 10 seconds until successful
/// - **Mutation**: Modifies the `container_dto` to set the `server_id` before sending
/// - **TLS**: Accepts invalid TLS certificates for development environments
/// - **Logging**: Provides detailed console output about broadcast attempts and results
///
/// # Errors
///
/// This function may return an error in the following cases:
///
/// * **HTTP Client Creation**: Failed to create HTTP client with TLS configuration
/// * **Network Issues**: Persistent connection failures to the backend server
/// * **Server Errors**: Backend returns non-success HTTP status codes repeatedly
/// * **JSON Serialization**: Cannot serialize container data (should be rare with proper DTOs)
pub async fn broadcast_docker_containers(
base_url: &str,
server_id: u16,
container_dto: &DockerRegistrationDto,
) -> Result<(), Box<dyn Error + Send + Sync>> {
// First get local IP
println!("Preparing to broadcast docker containers...");
// Create HTTP client for registration
let client = Client::builder()
.danger_accept_invalid_certs(true)
.build()?;
// Prepare registration data
let mut broadcast_data = container_dto.clone();
broadcast_data.server_id = server_id;
// Try to register (will retry on failure)
loop {
println!("Attempting to broadcast containers...");
let json_body = serde_json::to_string_pretty(&broadcast_data)?;
println!("📤 JSON being posted:\n{}", json_body);
let url = format!("{}/monitoring/service-discovery", base_url);
match client.post(&url).json(&container_dto).send().await {
Ok(resp) if resp.status().is_success() => {
println!(
"✅ Successfully broadcasted following docker container: {:?}",
container_dto
);
return Ok(());
}
Ok(resp) => {
let status = resp.status();
let text = resp.text().await.unwrap_or_default();
println!(
"⚠️ Broadcasting failed ({}): {} (will retry in 10 seconds)",
status, text
);
}
Err(err) => {
println!("⚠️ Broadcasting failed: {} (will retry in 10 seconds)", err);
}
}
sleep(Duration::from_secs(10)).await;
}
}
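As a rough illustration of how this routine might be invoked from the agent's startup path, a hedged sketch follows; `collect_registration` is a hypothetical helper standing in for however the DockerRegistrationDto is actually assembled, and the hard-coded server id is for illustration only:

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let base_url = std::env::var("SERVER_URL")?;
    // In the agent the id comes from register_with_server(); hard-coded here for illustration.
    let server_id: u16 = 1;
    // Hypothetical helper that builds a DockerRegistrationDto from the running containers.
    let dto = collect_registration().await?;
    // Retries internally every 10 seconds until the server accepts the broadcast.
    broadcast_docker_containers(&base_url, server_id, &dto).await?;
    Ok(())
}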
/// Periodically sends heartbeat signals to the backend server to indicate agent liveness.
///
/// This function runs in a background task and will retry on network errors.
@@ -224,7 +308,10 @@ pub async fn send_metrics(
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if commands are handled successfully.
pub async fn listening_to_server(docker: &Docker, base_url: &str) -> Result<(), Box<dyn Error + Send + Sync>> {
pub async fn listening_to_server(
docker: &Docker,
base_url: &str,
) -> Result<(), Box<dyn Error + Send + Sync>> {
let url = format!("{}/api/message", base_url);
let client = reqwest::Client::new();
@@ -238,7 +325,15 @@ pub async fn listening_to_server(docker: &Docker, base_url: &str) -> Result<(),
match response.json::<ServerMessage>().await {
Ok(msg) => {
// Acknowledge receipt immediately
if let Err(e) = send_acknowledgment(&client, base_url, &msg.message_id, "received", "Message received successfully").await {
if let Err(e) = send_acknowledgment(
&client,
base_url,
&msg.message_id,
"received",
"Message received successfully",
)
.await
{
eprintln!("Failed to send receipt acknowledgment: {}", e);
}
@@ -251,7 +346,15 @@ pub async fn listening_to_server(docker: &Docker, base_url: &str) -> Result<(),
Err(e) => ("error", format!("Execution failed: {}", e)),
};
if let Err(e) = send_acknowledgment(&client, base_url, &msg.message_id, status, &details).await {
if let Err(e) = send_acknowledgment(
&client,
base_url,
&msg.message_id,
status,
&details,
)
.await
{
eprintln!("Failed to send execution acknowledgment: {}", e);
}
}
@@ -289,7 +392,7 @@ pub async fn listening_to_server(docker: &Docker, base_url: &str) -> Result<(),
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if acknowledgment is sent successfully.
async fn send_acknowledgment(
pub async fn send_acknowledgment(
client: &reqwest::Client,
base_url: &str,
message_id: &str,
@@ -304,16 +407,39 @@ async fn send_acknowledgment(
details: details.to_string(),
};
let response = client
.post(&ack_url)
.json(&acknowledgment)
.send()
.await?;
let response = client.post(&ack_url).json(&acknowledgment).send().await?;
if response.status().is_success() {
println!("Acknowledgment sent successfully for message {}", message_id);
println!(
"Acknowledgment sent successfully for message {}",
message_id
);
} else {
eprintln!("Server returned error for acknowledgment: {}", response.status());
eprintln!(
"Server returned error for acknowledgment: {}",
response.status()
);
}
Ok(())
}
pub async fn send_docker_metrics(
base_url: &str,
docker_metrics: &DockerMetricDto,
) -> Result<(), Box<dyn Error + Send + Sync>> {
let client = Client::new();
let url = format!("{}/monitoring/docker-metric", base_url);
println!("Docker Metrics: {}", serde_json::to_string_pretty(&docker_metrics)?);
match client.post(&url).json(&docker_metrics).send().await {
Ok(res) => println!(
"✅ Sent docker metrics for server {} | Status: {}",
docker_metrics.server_id,
res.status()
),
Err(err) => eprintln!("❌ Failed to send docker metrics: {}", err),
}
Ok(())

Docker container utilities module

@@ -1,15 +1,17 @@
//! Docker container utilities for WatcherAgent
//!
//! Provides functions to list and process Docker containers using the Bollard library.
//!
use crate::docker::stats;
use crate::docker::stats::{ContainerCpuInfo, ContainerNetworkInfo};
use crate::models::DockerContainer;
use bollard::query_parameters::{ListContainersOptions};
use bollard::query_parameters::{
CreateImageOptions, ListContainersOptions, RestartContainerOptions,
};
use bollard::Docker;
use futures_util::StreamExt;
use std::error::Error;
/// Returns a list of available Docker containers.
///
@@ -18,7 +20,7 @@ use bollard::Docker;
///
/// # Returns
/// * `Vec<DockerContainer>` - Vector of Docker container info.
pub async fn get_available_container(docker: &Docker) -> Vec<DockerContainer> {
pub async fn get_available_containers(docker: &Docker) -> Vec<DockerContainer> {
println!("=== DOCKER CONTAINER LIST ===");
let options = Some(ListContainersOptions {
@@ -29,43 +31,30 @@ pub async fn get_available_container(docker: &Docker) -> Vec<DockerContainer> {
let containers_list = match docker.list_containers(options).await {
Ok(containers) => {
println!("Available containers ({}):", containers.len());
containers.into_iter()
containers
.into_iter()
.filter_map(|container| {
container.id.as_ref()?; // Skip if no ID
let id = container.id?;
let short_string_id = if id.len() > 12 { &id[..12] } else { &id };
let short_id: u32 = short_string_id.trim().parse().unwrap();
let short_id = if id.len() > 12 { &id[..12] } else { &id };
let name = container.names
let name = container
.names
.and_then(|names| names.into_iter().next())
.map(|name| name.trim_start_matches('/').to_string())
.unwrap_or_else(|| "unknown".to_string());
let image = container.image
let image = container
.image
.as_ref()
.map(|img| img.to_string())
.unwrap_or_else(|| "unknown".to_string());
let status = container.status
.as_ref()
.map(|s| match s.to_lowercase().as_str() {
s if s.contains("up") || s.contains("running") => "running".to_string(),
s if s.contains("exited") || s.contains("stopped") => "stopped".to_string(),
_ => s.to_string(),
})
.unwrap_or_else(|| "unknown".to_string());
println!(" - ID: {}, Image: {:?}, Name: {}", short_id, container.image, name);
Some(DockerContainer {
ID: short_id,
image,
Name: name,
Status: status,
_net_in: 0.0,
_net_out: 0.0,
_cpu_load: 0.0,
id: short_id.to_string(),
image: Some(image),
name: Some(name),
})
})
.collect()
@@ -79,6 +68,92 @@ pub async fn get_available_container(docker: &Docker) -> Vec<DockerContainer> {
containers_list
}
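A short, hedged usage sketch for the renamed `get_available_containers` (assuming it is reachable from the calling code; only the `id`, `name`, and `image` fields shown above are used):

use bollard::Docker;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let docker = Docker::connect_with_local_defaults()?;
    let containers = get_available_containers(&docker).await;
    for c in &containers {
        // id is the short (12-character) container id; name and image are optional.
        println!("{} {:?} {:?}", c.id, c.name, c.image);
    }
    Ok(())
}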
/// Pulls a new Docker image and restarts the current container.
///
/// # Arguments
/// * `docker` - Reference to a Bollard Docker client.
/// * `image` - The name of the Docker image to pull.
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if updated successfully, error otherwise.
pub async fn update_docker_image(
docker: &Docker,
image: &str,
) -> Result<(), Box<dyn Error + Send + Sync>> {
println!("Updating to {}", image);
// 1. Pull new image
let mut stream = docker.create_image(
Some(CreateImageOptions {
from_image: Some(image.to_string()),
..Default::default()
}),
None,
None,
);
// Use the stream with proper trait bounds
while let Some(result) = StreamExt::next(&mut stream).await {
match result {
Ok(progress) => {
if let Some(status) = progress.status {
println!("Pull status: {}", status);
}
}
Err(e) => {
eprintln!("Error pulling image: {}", e);
break;
}
}
}
// 2. Restart the current container
let options = Some(ListContainersOptions {
all: true,
..Default::default()
});
let container_id = docker
.list_containers(options)
.await?
.into_iter()
.find_map(|c| {
c.image
.as_ref()
.and_then(|img| if img == image { c.id } else { None })
});
let _ = restart_container(docker, &container_id.unwrap()).await;
Ok(())
}
/// Restarts the agent's own Docker container.
///
/// # Arguments
/// * `docker` - Reference to a Bollard Docker client.
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if restarted successfully, error otherwise.
pub async fn restart_container(
docker: &Docker,
container_id: &str,
) -> Result<(), Box<dyn Error + Send + Sync>> {
println!("Restarting container {}", container_id);
if let Err(e) = docker
.restart_container(
&container_id.to_string(),
Some(RestartContainerOptions {
signal: None,
t: Some(0),
}),
)
.await
{
eprintln!("Failed to restart container: {}", e);
}
Ok(())
}
/*
/// Extracts a Docker container ID from a string line.
///
@@ -91,3 +166,49 @@ pub fn extract_client_container_id(line: &str) -> Option<String> {
// ...existing code...
}
*/
/// Gets network statistics for a specific container
pub async fn get_network_stats(
docker: &Docker,
container_id: &str,
) -> Result<ContainerNetworkInfo, Box<dyn Error + Send + Sync>> {
let (_, net_info, _, _) = stats::get_single_container_stats(docker, container_id).await?;
if let Some(net_info) = net_info {
Ok(net_info)
} else {
// Return default network info if not found
println!("No network info found for container {}", container_id);
Ok(ContainerNetworkInfo {
container_id: Some(container_id.to_string()),
rx_bytes: None,
tx_bytes: None,
rx_packets: None,
tx_packets: None,
rx_errors: None,
tx_errors: None,
})
}
}
/// Gets CPU statistics for a specific container
pub async fn get_cpu_stats(
docker: &Docker,
container_id: &str,
) -> Result<ContainerCpuInfo, Box<dyn Error + Send + Sync>> {
let (cpu_info, _, _, _) = stats::get_single_container_stats(docker, container_id).await?;
if let Some(cpu_info) = cpu_info {
Ok(cpu_info)
} else {
// Return default CPU info if not found
println!("No CPU info found for container {}", container_id);
Ok(ContainerCpuInfo {
container_id: Some(container_id.to_string()),
cpu_usage_percent: None,
system_cpu_usage: None,
container_cpu_usage: None,
online_cpus: None,
})
}
}

View File

@@ -1,4 +1,3 @@
/// # Docker Module
///
/// This module provides Docker integration for WatcherAgent, including container enumeration, statistics, and lifecycle management.
@@ -10,71 +9,330 @@
///
pub mod container;
pub mod serverclientcomm;
pub mod stats;
use crate::models::{
DockerCollectMetricDto, DockerContainer, DockerContainerCpuDto, DockerContainerInfo,
DockerContainerNetworkDto, DockerContainerRamDto, DockerMetricDto, DockerRegistrationDto,
DockerContainerStatusDto
};
use bollard::Docker;
use std::error::Error;
use crate::models::DockerContainer;
/// Aggregated Docker statistics for all managed containers.
///
/// # Fields
/// - `number`: Number of running containers (optional)
/// - `net_in_total`: Total network receive rate in **bytes per second (B/s)** (optional)
/// - `net_out_total`: Total network transmit rate in **bytes per second (B/s)** (optional)
/// - `dockers`: List of [`DockerContainer`] statistics (optional)
/// Main Docker manager that holds the Docker client and provides all operations
#[derive(Debug, Clone)]
pub struct DockerInfo {
pub number: Option<u16>,
pub net_in_total: Option<f64>,
pub net_out_total: Option<f64>,
pub dockers: Option<Vec<DockerContainer>>,
pub struct DockerManager {
pub docker: Docker,
}
impl DockerInfo {
/// Collects Docker statistics for all managed containers.
///
/// # Returns
/// * `Result<DockerInfo, Box<dyn Error + Send + Sync>>` - Aggregated Docker statistics or error if collection fails.
pub async fn collect() -> Result<Self, Box<dyn Error + Send + Sync>> {
Ok(Self { number: None, net_in_total: None, net_out_total: None, dockers: None })
impl Default for DockerManager {
fn default() -> Self {
Self {
docker: Docker::connect_with_local_defaults()
.unwrap_or_else(|e| panic!("Failed to create default Docker connection: {}", e)),
}
}
}
impl DockerManager {
/// Creates a new DockerManager instance
pub fn new() -> Result<Self, Box<dyn Error + Send + Sync>> {
let docker = Docker::connect_with_local_defaults()
.map_err(|e| format!("Failed to connect to Docker: {}", e))?;
Ok(Self { docker })
}
/// Creates a DockerManager instance with optional Docker connection
pub fn new_optional() -> Option<Self> {
Docker::connect_with_local_defaults()
.map(|docker| Self { docker })
.ok()
}
/// Finds the Docker container running the agent by image name
pub async fn get_client_container(
&self,
) -> Result<Option<DockerContainer>, Box<dyn Error + Send + Sync>> {
let containers = container::get_available_containers(&self.docker).await;
let client_image = "watcher-agent";
Ok(containers
.into_iter()
.find(|c| c.image.as_deref().map(|img| img.contains(client_image)).unwrap_or(false))
.map(|container| DockerContainer {
id: container.id,
image: container.image,
name: container.name,
}))
}
/// Gets the current client version (image name) if running in Docker
pub async fn get_client_version(&self) -> String {
match self.get_client_container().await {
Ok(Some(container)) => container
.image
.as_deref()
.unwrap_or("unknown")
.split(':')
.next()
.unwrap_or("unknown")
.to_string(),
Ok(None) => {
println!("Warning: No WatcherAgent container found");
"unknown".to_string()
}
Err(e) => {
println!("Warning: Could not get current image version: {}", e);
"unknown".to_string()
}
}
}
/// Checks if Docker is available and the agent is running in a container
pub async fn is_dockerized(&self) -> bool {
self.get_client_container()
.await
.map(|c| c.is_some())
.unwrap_or(false)
}
/// Gets all available containers as DTOs for registration
pub async fn get_containers(
&self,
) -> Result<Vec<DockerContainer>, Box<dyn Error + Send + Sync>> {
let containers = container::get_available_containers(&self.docker).await;
Ok(containers
.into_iter()
.map(|container| DockerContainer {
id: container.id,
image: container.image,
name: container.name,
})
.collect())
}
/// Gets the number of running containers
pub async fn get_container_count(&self) -> Result<usize, Box<dyn Error + Send + Sync>> {
let containers = container::get_available_containers(&self.docker).await;
Ok(containers.len())
}
/// Restarts a specific container by ID
pub async fn restart_container(
&self,
container_id: &str,
) -> Result<(), Box<dyn Error + Send + Sync>> {
container::restart_container(&self.docker, container_id).await
}
/// Collects Docker metrics for all containers
pub async fn collect_metrics(&self) -> Result<DockerMetricDto, Box<dyn Error + Send + Sync>> {
let containers = self.get_containers().await?;
// Get stats with status information
let stats_result = stats::get_container_stats(&self.docker).await;
let (cpu_stats, net_stats, mem_stats, status_stats) = match stats_result {
Ok(stats) => stats,
Err(e) => {
eprintln!("Warning: Failed to get container stats: {}", e);
// Return empty stats instead of failing completely
(Vec::new(), Vec::new(), Vec::new(), Vec::new())
}
};
println!(
"Debug: Found {} containers, {} CPU stats, {} network stats, {} memory stats, {} status stats",
containers.len(),
cpu_stats.len(),
net_stats.len(),
mem_stats.len(),
status_stats.len(),
);
let container_infos_total: Vec<_> = containers
.into_iter()
.map(|container| {
// Use short ID for matching (first 12 chars)
let container_short_id = if container.id.len() > 12 {
&container.id[..12]
} else {
&container.id
};
let cpu = cpu_stats
.iter()
.find(|c| {
c.container_id
.as_ref()
.map(|id| id.starts_with(container_short_id))
.unwrap_or(false)
})
.cloned();
let network = net_stats
.iter()
.find(|n| {
n.container_id
.as_ref()
.map(|id| id.starts_with(container_short_id))
.unwrap_or(false)
})
.cloned();
let ram = mem_stats
.iter()
.find(|m| {
m.container_id
.as_ref()
.map(|id| id.starts_with(container_short_id))
.unwrap_or(false)
})
.cloned();
let status = status_stats
.iter()
.find(|s| {
s.container_id
.as_ref()
.map(|id| id.starts_with(container_short_id))
.unwrap_or(false)
})
.cloned(); // Clone the entire ContainerStatusInfo
// Debug output for this container
if cpu.is_none() || network.is_none() || ram.is_none() {
println!(
"Debug: Container {} - CPU: {:?}, Network: {:?}, RAM: {:?}, Status {:?}",
container_short_id,
cpu.is_some(),
network.is_some(),
ram.is_some(),
status.is_some()
);
}
// Debug output for this container
if cpu.is_none() || network.is_none() || ram.is_none() || status.is_none() {
println!(
"Debug: Container {} - CPU: {:?}, Network: {:?}, RAM: {:?}, Status: {:?}",
container_short_id,
cpu.is_some(),
network.is_some(),
ram.is_some(),
status.is_some()
);
}
DockerContainerInfo {
container: Some(container),
status,
cpu,
network,
ram,
}
})
.collect();
let container_infos: Vec<DockerCollectMetricDto> = container_infos_total
.into_iter()
.filter_map(|info| {
let container = match info.container {
Some(c) => c,
None => {
eprintln!("Warning: Container info missing container data, skipping");
return None;
}
};
// Safely handle CPU data with defaults
let cpu_dto = if let Some(cpu) = info.cpu {
DockerContainerCpuDto {
cpu_load: cpu.cpu_usage_percent,
}
} else {
DockerContainerCpuDto { cpu_load: None }
};
// Safely handle RAM data with defaults
let ram_dto = if let Some(ram) = info.ram {
DockerContainerRamDto {
ram_load: ram.memory_usage_percent,
}
} else {
DockerContainerRamDto { ram_load: None }
};
// Safely handle network data with defaults
let network_dto = if let Some(net) = info.network {
DockerContainerNetworkDto {
net_in: net.rx_bytes.map(|bytes| bytes as f64),
net_out: net.tx_bytes.map(|bytes| bytes as f64),
}
} else {
DockerContainerNetworkDto {
net_in: None,
net_out: None,
}
};
let status_dto = if let Some(status_info) = info.status {
DockerContainerStatusDto {
status: status_info.status, // Extract the status string
}
} else {
DockerContainerStatusDto { status: None }
};
Some(DockerCollectMetricDto {
id: container.id,
status: status_dto,
cpu: cpu_dto,
ram: ram_dto,
network: network_dto,
})
})
.collect();
let dto = DockerMetricDto {
server_id: 0, // This should be set by the caller
containers: serde_json::to_value(&container_infos)?,
};
Ok(dto)
}
pub async fn create_registration_dto(
&self,
) -> Result<DockerRegistrationDto, Box<dyn Error + Send + Sync>> {
let containers = self.get_containers().await?;
let container_string = serde_json::to_value(&containers)?;
let dto = DockerRegistrationDto {
server_id: 0, // This will be set by the caller
containers: container_string,
};
Ok(dto)
}
}
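// Illustrative sketch (an assumption, not part of this diff): typical `DockerManager` usage
// from an async context. The server_id value is made up; in the agent it comes from
// registration with the backend, and metric sending is omitted here.
#[allow(dead_code)]
async fn docker_manager_usage_example() -> Result<(), Box<dyn Error + Send + Sync>> {
if let Some(manager) = DockerManager::new_optional() {
if manager.is_dockerized().await {
println!("Running inside Docker as {}", manager.get_client_version().await);
}
// Collect one round of container metrics and print the JSON payload.
let mut metrics = manager.collect_metrics().await?;
metrics.server_id = 1; // assumed: the real ID is assigned by the backend
println!("{}", serde_json::to_string_pretty(&metrics.containers)?);
}
Ok(())
}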
// Keep these as utility functions if needed, but they should use DockerManager internally
impl DockerContainer {
/*
/// Restarts the specified Docker container by ID.
///
/// # Arguments
/// * `docker` - Reference to a Bollard Docker client
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if restarted successfully, error otherwise.
pub async fn restart_container(docker: &Docker) -> Result<(), Box<dyn Error + Send + Sync>> {
// ...existing code...
}
*/
/// Returns the container ID for a given [`DockerContainer`].
///
/// # Arguments
/// * `container` - Reference to a [`DockerContainer`]
///
/// # Returns
/// * `Result<u32, Box<dyn Error + Send + Sync>>` - Container ID as integer.
pub async fn get_docker_container_id(container: DockerContainer) -> Result<u32, Box<dyn Error + Send + Sync>> {
Ok(container.ID)
/// Returns the container ID
pub fn id(&self) -> &str {
&self.id
}
/// Returns the image name for a given [`DockerContainer`].
///
/// # Arguments
/// * `container` - Reference to a [`DockerContainer`]
///
/// # Returns
/// * `Result<String, Box<dyn Error + Send + Sync>>` - Image name as string.
pub async fn get_docker_container_image(container: DockerContainer) -> Result<String, Box<dyn Error + Send + Sync>> {
Ok(container.image)
/// Returns the image name
pub fn image(&self) -> &str {
self.image.as_deref().unwrap_or("unknown")
}
/// Returns the container name
pub fn name(&self) -> &str {
self.name.as_deref().unwrap_or("unknown")
}
}

View File

@@ -1,15 +1,13 @@
//! Server-client communication utilities for WatcherAgent
//!
//! Handles server commands, Docker image updates, and container management using the Bollard library.
//!
use crate::models::{DockerContainer, ServerMessage};
use crate::docker::container::{get_available_container};
use crate::models::ServerMessage;
use std::error::Error;
use super::container::{restart_container, update_docker_image};
//use bollard::query_parameters::{CreateImageOptions, RestartContainerOptions};
use bollard::Docker;
use bollard::query_parameters::{CreateImageOptions, RestartContainerOptions};
use futures_util::StreamExt;
use std::error::Error;
/// Handles a message from the backend server and dispatches the appropriate action.
///
@@ -19,7 +17,10 @@ use futures_util::StreamExt;
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if handled successfully, error otherwise.
pub async fn handle_server_message(docker: &Docker, msg: ServerMessage) -> Result<(), Box<dyn Error + Send + Sync>> {
pub async fn handle_server_message(
docker: &Docker,
msg: ServerMessage,
) -> Result<(), Box<dyn Error + Send + Sync>> {
let msg = msg.clone();
println!("Handling server message: {:?}", msg);
@@ -36,10 +37,14 @@ pub async fn handle_server_message(docker: &Docker, msg: ServerMessage) -> Resul
}
}
"restart_container" => {
println!("Received restart container command");
// Call your restart_container function here
restart_container(docker).await?;
if let Some(image_name) = msg.data.get("image").and_then(|v| v.as_str()) {
println!("Received restart command for image: {}", image_name);
// Restart the matching container via the shared helper (note: the image name is passed as the container identifier)
restart_container(docker, image_name).await?;
Ok(())
} else {
Err("Missing image name in update message".into())
}
}
"stop_agent" => {
println!("Received stop agent command");
@@ -52,87 +57,3 @@ pub async fn handle_server_message(docker: &Docker, msg: ServerMessage) -> Resul
}
}
}
/// Pulls a new Docker image and restarts the current container.
///
/// # Arguments
/// * `docker` - Reference to a Bollard Docker client.
/// * `image` - The name of the Docker image to pull.
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if updated successfully, error otherwise.
pub async fn update_docker_image(docker: &Docker, image: &str) -> Result<(), Box<dyn Error + Send + Sync>> {
println!("Updating to {}", image);
// 1. Pull new image
let mut stream = docker.create_image(
Some(CreateImageOptions {
from_image: Some(image.to_string()),
..Default::default()
}),
None,
None,
);
// Use the stream with proper trait bounds
while let Some(result) = StreamExt::next(&mut stream).await {
match result {
Ok(progress) => {
if let Some(status) = progress.status {
println!("Pull status: {}", status);
}
}
Err(e) => {
eprintln!("Error pulling image: {}", e);
break;
}
}
}
// 2. Restart the current container
let _ = restart_container(docker).await;
Ok(())
}
/// Finds the Docker container running the agent by image name.
///
/// # Arguments
/// * `docker` - Reference to a Bollard Docker client.
///
/// # Returns
/// * `Result<Option<DockerContainer>, Box<dyn Error + Send + Sync>>` - The agent's container info if found.
pub async fn get_client_container(docker: &Docker) -> Result<Option<DockerContainer>, Box<dyn Error + Send + Sync>> {
let containers = get_available_container(docker).await;
let client_image = "watcher-agent";
// Find container with the specific image
if let Some(container) = containers.iter().find(|c| c.image == client_image) {
Ok(Some(container.clone()))
} else {
Ok(None)
}
}
/// Restarts the agent's own Docker container.
///
/// # Arguments
/// * `docker` - Reference to a Bollard Docker client.
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if restarted successfully, error otherwise.
pub async fn restart_container(docker: &Docker) -> Result<(), Box<dyn Error + Send + Sync>> {
if let Ok(Some(container)) = get_client_container(docker).await {
let container_id = container.clone().ID;
println!("Restarting container {}", container_id);
if let Err(e) = docker.restart_container(&container_id.to_string(), Some(RestartContainerOptions { signal: None, t: Some(0) }))
.await
{
eprintln!("Failed to restart container: {}", e);
}
} else {
eprintln!("No container ID found (HOSTNAME not set?)");
}
Ok(())
}

View File

@@ -0,0 +1,99 @@
use super::ContainerCpuInfo;
use bollard::query_parameters::{ListContainersOptions, StatsOptions};
use bollard::Docker;
use futures_util::stream::TryStreamExt;
use std::error::Error;
/// Get CPU statistics for all containers
pub async fn get_all_containers_cpu_stats(
docker: &Docker,
) -> Result<Vec<ContainerCpuInfo>, Box<dyn Error + Send + Sync>> {
let containers = docker
.list_containers(Some(ListContainersOptions {
all: true,
..Default::default()
}))
.await?;
let mut cpu_infos = Vec::new();
for container in containers {
let id = container.id.unwrap_or_default();
// Skip if no ID
if id.is_empty() {
continue;
}
if let Some(cpu_info) = get_single_container_cpu_stats(docker, &id).await? {
cpu_infos.push(cpu_info);
}
}
Ok(cpu_infos)
}
/// Get CPU statistics for a specific container
pub async fn get_single_container_cpu_stats(
docker: &Docker,
container_id: &str,
) -> Result<Option<ContainerCpuInfo>, Box<dyn Error + Send + Sync>> {
let mut stats_stream = docker.stats(
container_id,
Some(StatsOptions {
stream: false,
one_shot: true,
}),
);
if let Some(stats) = stats_stream.try_next().await? {
if let (Some(cpu_stats), Some(precpu_stats)) = (&stats.cpu_stats, &stats.precpu_stats) {
if let (Some(cpu_usage), Some(pre_cpu_usage)) =
(&cpu_stats.cpu_usage, &precpu_stats.cpu_usage)
{
let cpu_delta = cpu_usage
.total_usage
.unwrap_or(0)
.saturating_sub(pre_cpu_usage.total_usage.unwrap_or(0));
let system_delta = cpu_stats
.system_cpu_usage
.unwrap_or(0)
.saturating_sub(precpu_stats.system_cpu_usage.unwrap_or(0));
let online_cpus = cpu_stats.online_cpus.unwrap_or(1);
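// Same percentage formula as `docker stats`:
// (container CPU delta / system CPU delta) * number of online CPUs * 100.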
let cpu_percent = if system_delta > 0 && online_cpus > 0 {
(cpu_delta as f64 / system_delta as f64) * online_cpus as f64 * 100.0
} else {
0.0
};
return Ok(Some(ContainerCpuInfo {
container_id: Some(container_id.to_string()),
cpu_usage_percent: Some(cpu_percent),
system_cpu_usage: Some(cpu_stats.system_cpu_usage.unwrap_or(0)),
container_cpu_usage: Some(cpu_usage.total_usage.unwrap_or(0)),
online_cpus: Some(online_cpus),
}));
}
}
}
Ok(None)
}
/// Get average CPU usage across all containers
pub async fn get_average_cpu_usage(docker: &Docker) -> Result<f64, Box<dyn Error + Send + Sync>> {
let cpu_infos = get_all_containers_cpu_stats(docker).await?;
if cpu_infos.is_empty() {
return Ok(0.0);
}
let total_cpu: f64 = cpu_infos
.iter()
.map(|cpu| cpu.cpu_usage_percent.unwrap_or(0.0))
.sum();
Ok(total_cpu / cpu_infos.len() as f64)
}

View File

@@ -0,0 +1,101 @@
pub mod cpu;
pub mod network;
pub mod ram;
pub mod status;
use serde::{Deserialize, Serialize};
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ContainerStatusInfo {
pub container_id: Option<String>,
pub status: Option<String>, // "running", "stopped", "paused", "exited", etc.
pub state: Option<String>, // More detailed state information
pub started_at: Option<String>,
pub finished_at: Option<String>,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ContainerCpuInfo {
pub container_id: Option<String>,
pub cpu_usage_percent: Option<f64>,
pub system_cpu_usage: Option<u64>,
pub container_cpu_usage: Option<u64>,
pub online_cpus: Option<u32>,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ContainerNetworkInfo {
pub container_id: Option<String>,
pub rx_bytes: Option<u64>,
pub tx_bytes: Option<u64>,
pub rx_packets: Option<u64>,
pub tx_packets: Option<u64>,
pub rx_errors: Option<u64>,
pub tx_errors: Option<u64>,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ContainerMemoryInfo {
pub container_id: Option<String>,
pub memory_usage: Option<u64>,
pub memory_limit: Option<u64>,
pub memory_usage_percent: Option<f64>,
}
use bollard::Docker;
use std::error::Error;
/// Get container statistics for all containers using an existing Docker client
pub async fn get_container_stats(
docker: &Docker,
) -> Result<
(
Vec<ContainerCpuInfo>,
Vec<ContainerNetworkInfo>,
Vec<ContainerMemoryInfo>,
Vec<ContainerStatusInfo>,
),
Box<dyn Error + Send + Sync>,
> {
let cpu_infos = cpu::get_all_containers_cpu_stats(docker).await?;
let net_infos = network::get_all_containers_network_stats(docker).await?;
let mem_infos = ram::get_all_containers_memory_stats(docker).await?;
let status_infos = status::get_all_containers_status(docker).await?;
Ok((cpu_infos, net_infos, mem_infos, status_infos))
}
/// Get container statistics for a specific container
pub async fn get_single_container_stats(
docker: &Docker,
container_id: &str,
) -> Result<(
Option<ContainerCpuInfo>,
Option<ContainerNetworkInfo>,
Option<ContainerMemoryInfo>,
Option<ContainerStatusInfo>,
), Box<dyn Error + Send + Sync>> {
let cpu_info = cpu::get_single_container_cpu_stats(docker, container_id).await?;
let net_info = network::get_single_container_network_stats(docker, container_id).await?;
let mem_info = ram::get_single_container_memory_stats(docker, container_id).await?;
let status_info = status::get_single_container_status(docker, container_id).await?;
Ok((cpu_info, net_info, mem_info, status_info))
}
/// Get total network statistics across all containers
pub async fn get_total_network_stats(
docker: &Docker,
) -> Result<(u64, u64), Box<dyn Error + Send + Sync>> {
network::get_total_network_stats(docker).await
}
/// Get average CPU usage across all containers
pub async fn get_average_cpu_usage(docker: &Docker) -> Result<f64, Box<dyn Error + Send + Sync>> {
cpu::get_average_cpu_usage(docker).await
}
/// Get total memory usage across all containers
pub async fn get_total_memory_usage(docker: &Docker) -> Result<u64, Box<dyn Error + Send + Sync>> {
ram::get_total_memory_usage(docker).await
}
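// Illustrative sketch (an assumption, not part of this diff): combining the aggregation
// helpers above into a one-shot summary; the output formatting is arbitrary.
#[allow(dead_code)]
async fn print_stats_summary(docker: &Docker) -> Result<(), Box<dyn Error + Send + Sync>> {
let (rx, tx) = get_total_network_stats(docker).await?;
let avg_cpu = get_average_cpu_usage(docker).await?;
let mem = get_total_memory_usage(docker).await?;
println!("net rx={} B, tx={} B, avg cpu={:.1}%, mem={} B", rx, tx, avg_cpu, mem);
Ok(())
}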

View File

@@ -0,0 +1,79 @@
use super::ContainerNetworkInfo;
use bollard::query_parameters::{ListContainersOptions, StatsOptions};
use bollard::Docker;
use futures_util::stream::TryStreamExt;
use std::error::Error;
/// Get network statistics for all containers
pub async fn get_all_containers_network_stats(
docker: &Docker,
) -> Result<Vec<ContainerNetworkInfo>, Box<dyn Error + Send + Sync>> {
let containers = docker
.list_containers(Some(ListContainersOptions {
all: true,
..Default::default()
}))
.await?;
let mut net_infos = Vec::new();
for container in containers {
let id = container.id.unwrap_or_default();
// Skip if no ID
if id.is_empty() {
continue;
}
if let Some(net_info) = get_single_container_network_stats(docker, &id).await? {
net_infos.push(net_info);
}
}
Ok(net_infos)
}
/// Get network statistics for a specific container
pub async fn get_single_container_network_stats(
docker: &Docker,
container_id: &str,
) -> Result<Option<ContainerNetworkInfo>, Box<dyn Error + Send + Sync>> {
let mut stats_stream = docker.stats(
container_id,
Some(StatsOptions {
stream: false,
one_shot: true,
}),
);
if let Some(stats) = stats_stream.try_next().await? {
if let Some(networks) = stats.networks {
// Take the first network interface (usually eth0)
if let Some((_name, net)) = networks.into_iter().next() {
return Ok(Some(ContainerNetworkInfo {
container_id: Some(container_id.to_string()),
rx_bytes: net.rx_bytes,
tx_bytes: net.tx_bytes,
rx_packets: net.rx_packets,
tx_packets: net.tx_packets,
rx_errors: net.rx_errors,
tx_errors: net.tx_errors,
}));
}
}
}
Ok(None)
}
/// Get total network statistics across all containers
pub async fn get_total_network_stats(
docker: &Docker,
) -> Result<(u64, u64), Box<dyn Error + Send + Sync>> {
let net_infos = get_all_containers_network_stats(docker).await?;
let total_rx: u64 = net_infos.iter().map(|net| net.rx_bytes.unwrap_or(0)).sum();
let total_tx: u64 = net_infos.iter().map(|net| net.tx_bytes.unwrap_or(0)).sum();
Ok((total_rx, total_tx))
}

View File

@@ -0,0 +1,77 @@
use super::ContainerMemoryInfo;
use bollard::query_parameters::{ListContainersOptions, StatsOptions};
use bollard::Docker;
use futures_util::stream::TryStreamExt;
use std::error::Error;
/// Get memory statistics for all containers
pub async fn get_all_containers_memory_stats(
docker: &Docker,
) -> Result<Vec<ContainerMemoryInfo>, Box<dyn Error + Send + Sync>> {
let containers = docker
.list_containers(Some(ListContainersOptions {
all: true,
..Default::default()
}))
.await?;
let mut mem_infos = Vec::new();
for container in containers {
let id = container.id.unwrap_or_default();
// Skip if no ID
if id.is_empty() {
continue;
}
if let Some(mem_info) = get_single_container_memory_stats(docker, &id).await? {
mem_infos.push(mem_info);
}
}
Ok(mem_infos)
}
/// Get memory statistics for a specific container
pub async fn get_single_container_memory_stats(
docker: &Docker,
container_id: &str,
) -> Result<Option<ContainerMemoryInfo>, Box<dyn Error + Send + Sync>> {
let mut stats_stream = docker.stats(
container_id,
Some(StatsOptions {
stream: false,
one_shot: true,
}),
);
if let Some(stats) = stats_stream.try_next().await? {
if let Some(memory_stats) = &stats.memory_stats {
let memory_usage = memory_stats.usage.unwrap_or(0);
let memory_limit = memory_stats.limit.unwrap_or(1); // Avoid division by zero
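// Express usage as a percentage of the container's configured memory limit.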
let memory_usage_percent = if memory_limit > 0 {
(memory_usage as f64 / memory_limit as f64) * 100.0
} else {
0.0
};
return Ok(Some(ContainerMemoryInfo {
container_id: Some(container_id.to_string()),
memory_usage: Some(memory_usage),
memory_limit: Some(memory_limit),
memory_usage_percent: Some(memory_usage_percent),
}));
}
}
Ok(None)
}
/// Get total memory usage across all containers
pub async fn get_total_memory_usage(docker: &Docker) -> Result<u64, Box<dyn Error + Send + Sync>> {
let mem_infos = get_all_containers_memory_stats(docker).await?;
let total_memory: u64 = mem_infos.iter().map(|mem| mem.memory_usage.unwrap_or(0)).sum();
Ok(total_memory)
}

View File

@@ -0,0 +1,126 @@
use super::ContainerStatusInfo;
use std::error::Error;
use bollard::Docker;
use bollard::query_parameters::{ListContainersOptions, InspectContainerOptions};
use bollard::models::{ContainerSummaryStateEnum, ContainerStateStatusEnum};
/// Get status information for all containers
pub async fn get_all_containers_status(
docker: &Docker,
) -> Result<Vec<ContainerStatusInfo>, Box<dyn Error + Send + Sync>> {
let containers = docker
.list_containers(Some(ListContainersOptions {
all: true, // Include stopped containers
..Default::default()
}))
.await?;
let mut status_infos = Vec::new();
for container in containers {
let id = container.id.unwrap_or_default();
if id.is_empty() {
continue;
}
// Convert ContainerSummaryStateEnum to String
let status = container.state.map(|state| match state {
ContainerSummaryStateEnum::CREATED => "created".to_string(),
ContainerSummaryStateEnum::RUNNING => "running".to_string(),
ContainerSummaryStateEnum::PAUSED => "paused".to_string(),
ContainerSummaryStateEnum::RESTARTING => "restarting".to_string(),
ContainerSummaryStateEnum::REMOVING => "removing".to_string(),
ContainerSummaryStateEnum::EXITED => "exited".to_string(),
ContainerSummaryStateEnum::DEAD => "dead".to_string(),
_ => "unknown".to_string(),
});
// Convert timestamp from i64 to String
let started_at = container.created.map(|timestamp| timestamp.to_string());
status_infos.push(ContainerStatusInfo {
container_id: Some(id.clone()),
status,
state: container.status,
started_at,
finished_at: None, // Docker API doesn't provide finished_at in list
});
}
Ok(status_infos)
}
/// Get status information for a specific container
pub async fn get_single_container_status(
docker: &Docker,
container_id: &str,
) -> Result<Option<ContainerStatusInfo>, Box<dyn Error + Send + Sync>> {
// First try to get from list (faster)
let containers = docker
.list_containers(Some(ListContainersOptions {
all: true,
..Default::default()
}))
.await?;
if let Some(container) = containers.into_iter().find(|c| {
c.id.as_ref().map(|id| id == container_id).unwrap_or(false)
}) {
// Convert ContainerSummaryStateEnum to String
let status = container.state.map(|state| match state {
ContainerSummaryStateEnum::CREATED => "created".to_string(),
ContainerSummaryStateEnum::RUNNING => "running".to_string(),
ContainerSummaryStateEnum::PAUSED => "paused".to_string(),
ContainerSummaryStateEnum::RESTARTING => "restarting".to_string(),
ContainerSummaryStateEnum::REMOVING => "removing".to_string(),
ContainerSummaryStateEnum::EXITED => "exited".to_string(),
ContainerSummaryStateEnum::DEAD => "dead".to_string(),
_ => "unknown".to_string(),
});
// Convert timestamp from i64 to String
let started_at = container.created.map(|timestamp| timestamp.to_string());
return Ok(Some(ContainerStatusInfo {
container_id: Some(container_id.to_string()),
status,
state: container.status,
started_at,
finished_at: None,
}));
}
// Fallback to inspect for more detailed info
match docker.inspect_container(container_id, None::<InspectContainerOptions>).await {
Ok(container_details) => {
let state = container_details.state.unwrap_or_default();
// Convert ContainerStateStatusEnum to String
let status = state.status.map(|status_enum| match status_enum {
ContainerStateStatusEnum::CREATED => "created".to_string(),
ContainerStateStatusEnum::RUNNING => "running".to_string(),
ContainerStateStatusEnum::PAUSED => "paused".to_string(),
ContainerStateStatusEnum::RESTARTING => "restarting".to_string(),
ContainerStateStatusEnum::REMOVING => "removing".to_string(),
ContainerStateStatusEnum::EXITED => "exited".to_string(),
ContainerStateStatusEnum::DEAD => "dead".to_string(),
_ => "unknown".to_string(),
});
// These are already Option<String> from the Docker API
let started_at = state.clone().started_at;
let finished_at = state.clone().finished_at;
Ok(Some(ContainerStatusInfo {
container_id: Some(container_id.to_string()),
status,
state: Some(format!("{:?}", state)), // Convert state to string
started_at,
finished_at,
}))
}
Err(_) => Ok(None), // Container not found
}
}

View File

@@ -1,4 +1,3 @@
/// # WatcherAgent
///
/// **WatcherAgent** is a cross-platform system monitoring agent written in Rust.
@@ -26,18 +25,15 @@
/// ```
///
/// The agent will register itself, start collecting metrics, and listen for remote commands.
pub mod api;
pub mod docker;
pub mod hardware;
pub mod metrics;
pub mod models;
pub mod docker;
use tokio::task::JoinHandle;
use bollard::Docker;
use std::env;
use std::error::Error;
use tokio::task::JoinHandle;
/// Awaits a spawned asynchronous task and flattens its nested `Result` type.
///
@@ -82,26 +78,8 @@ async fn flatten<T>(
/// Returns an error if registration or any background task fails, or if required arguments are missing.
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error + Send + Sync>> {
// Initialize Docker client
let docker = Docker::connect_with_local_defaults()
.map_err(|e| format!("Failed to connect to Docker: {}", e))?;
// Get current image version
let client_version = match docker::serverclientcomm::get_client_container(&docker).await {
Ok(Some(version)) => version.image,
Ok(None) => {
eprintln!("Warning: No image version found");
"unknown".to_string()
}
Err(e) => {
eprintln!("Warning: Could not get current image version: {}", e);
"unknown".to_string()
}
};
println!("Client Version: {}", client_version);
// Parse command-line arguments
let args: Vec<String> = env::args().collect();
// args[0] is the binary name, args[1] is the first actual argument
if args.len() < 2 {
eprintln!("Usage: {} <server-url>", args[0]);
return Err("Missing server URL argument".into());
@@ -111,20 +89,53 @@ async fn main() -> Result<(), Box<dyn Error + Send + Sync>> {
// Registration with backend server
let (server_id, ip) = match api::register_with_server(&server_url).await {
Ok((id, ip)) => (id, ip),
Ok((id, ip)) => {
println!("Registered with server. ID: {}, IP: {}", id, ip);
(id, ip)
}
Err(e) => {
eprintln!("Fehler bei der Registrierung am Server: {e}");
return Err(e);
}
};
// Initialize Docker (optional - agent can run without Docker)
let docker_manager = docker::DockerManager::new_optional();
// Get current image version
let client_version = if let Some(ref docker_manager) = docker_manager {
docker_manager.get_client_version().await
} else {
"unknown".to_string()
};
println!("Client Version: {}", client_version);
// Prepare Docker registration DTO
let container_dto = if let Some(ref docker_manager) = docker_manager {
docker_manager.create_registration_dto().await?
} else {
println!("Fallback for failing registration");
models::DockerRegistrationDto {
server_id: 0,
//container_count: 0, --- IGNORE ---
containers: serde_json::to_value(&"")?,
}
};
let _ =
api::broadcast_docker_containers(server_url, server_id, &mut container_dto.clone()).await?;
// Start background tasks
// Start server listening for commands
let listening_handle = tokio::spawn({
let docker = docker.clone();
// Start server listening for commands (only if Docker is available)
let listening_handle = if let Some(ref docker_manager) = docker_manager {
tokio::spawn({
let docker = docker_manager.docker.clone();
let server_url = server_url.to_string();
async move { api::listening_to_server(&docker, &server_url).await }
});
})
} else {
println!("Docker not available, skipping server command listener.");
tokio::spawn(async { Ok(()) }) // Dummy task
};
// Start heartbeat in background
let heartbeat_handle = tokio::spawn({
@@ -138,9 +149,16 @@ async fn main() -> Result<(), Box<dyn Error + Send + Sync>> {
let metrics_handle = tokio::spawn({
let ip = ip.clone();
let server_url = server_url.to_string();
let docker_manager = docker_manager
.as_ref()
.cloned()
.expect("metrics collection currently requires a Docker connection");
async move {
let mut collector = metrics::Collector::new(server_id, ip);
collector.run(&server_url).await
let mut collector = metrics::Collector::new(server_id, ip, docker_manager);
if let Err(e) = collector.run(&server_url).await {
eprintln!("Metrics collection error: {}", e);
// Don't panic, just return the error
Err(e)
} else {
Ok(())
}
}
});

View File

@@ -1,5 +1,3 @@
/// # Metrics Module
///
/// This module orchestrates the collection and reporting of hardware and network metrics for WatcherAgent.
@@ -15,10 +13,11 @@ use std::error::Error;
use std::time::Duration;
use crate::api;
use crate::docker::DockerManager;
//use crate::docker::DockerInfo;
use crate::hardware::network::NetworkMonitor;
use crate::hardware::HardwareInfo;
use crate::models::MetricDto;
use crate::models::{DockerMetricDto, MetricDto};
/// Main orchestrator for hardware and network metric collection and reporting.
///
@@ -29,12 +28,12 @@ use crate::models::MetricDto;
/// - `server_id`: Unique server ID assigned by the backend.
/// - `ip_address`: IP address of the agent.
pub struct Collector {
docker_manager: DockerManager,
network_monitor: NetworkMonitor,
server_id: i32,
server_id: u16,
ip_address: String,
}
impl Collector {
/// Creates a new `Collector` instance for metric collection and reporting.
///
@@ -44,8 +43,9 @@ impl Collector {
///
/// # Returns
/// A new `Collector` ready to collect and report metrics.
pub fn new(server_id: i32, ip_address: String) -> Self {
pub fn new(server_id: u16, ip_address: String, docker_manager: DockerManager) -> Self {
Self {
docker_manager,
network_monitor: NetworkMonitor::new(),
server_id,
ip_address,
@@ -75,7 +75,16 @@ impl Collector {
continue;
}
};
let docker_metrics = match self.docker_collect().await {
Ok(metrics) => metrics,
Err(e) => {
eprintln!("Error collecting docker metrics: {}", e);
tokio::time::sleep(Duration::from_secs(10)).await;
continue;
}
};
api::send_metrics(base_url, &metrics).await?;
api::send_docker_metrics(base_url, &docker_metrics).await?;
tokio::time::sleep(Duration::from_secs(20)).await;
}
}
@@ -109,10 +118,20 @@ impl Collector {
ram_load: hardware.memory.current_load.unwrap_or_default(),
ram_size: hardware.memory.total_size.unwrap_or_default(),
disk_size: hardware.disk.total_size.unwrap_or_default(),
disk_usage: hardware.disk.total_used.unwrap_or_default(),
disk_usage: hardware.disk.total_usage.unwrap_or_default(),
disk_temp: 0.0, // not supported
net_rx: hardware.network.rx_rate.unwrap_or_default(),
net_tx: hardware.network.tx_rate.unwrap_or_default(),
})
}
/// Collects Docker container metrics via the `DockerManager` and tags them with this
/// collector's server ID.
pub async fn docker_collect(&self) -> Result<DockerMetricDto, Box<dyn Error + Send + Sync>> {
let metrics = self.docker_manager.collect_metrics().await?;
Ok(DockerMetricDto {
server_id: self.server_id,
containers: metrics.containers,
})
}
}

View File

@@ -1,5 +1,3 @@
/// # Models Module
///
/// This module defines all data structures (DTOs) used for communication between WatcherAgent and the backend server, as well as hardware metrics and Docker container info.
@@ -11,7 +9,10 @@
///
/// ## Usage
/// These types are serialized/deserialized for HTTP communication and used throughout the agent for data exchange.
use crate::docker::stats;
use serde::{Deserialize, Serialize};
use serde_json::Value;
/// Registration data sent to the backend server.
///
@@ -25,7 +26,7 @@ use serde::{Deserialize, Serialize};
#[derive(Serialize, Debug)]
pub struct RegistrationDto {
#[serde(rename = "id")]
pub id: i32,
pub server_id: u16,
#[serde(rename = "ipAddress")]
pub ip_address: String,
#[serde(rename = "cpuType")]
@@ -59,7 +60,7 @@ pub struct RegistrationDto {
#[derive(Serialize, Debug)]
pub struct MetricDto {
#[serde(rename = "serverId")]
pub server_id: i32,
pub server_id: u16,
#[serde(rename = "ipAddress")]
pub ip_address: String,
#[serde(rename = "cpu_Load")]
@@ -116,7 +117,7 @@ pub struct DiskInfoDetailed {
/// - `ip_address`: IPv4 or IPv6 address (string)
#[derive(Deserialize)]
pub struct IdResponse {
pub id: i32,
pub id: u16,
#[serde(rename = "ipAddress")]
pub ip_address: String,
}
@@ -182,16 +183,91 @@ pub struct Acknowledgment {
/// - `image`: Docker image name (string)
/// - `Name`: Container name (string)
/// - `Status`: Container status ("running", "stopped", etc.)
/// - `_net_in`: Network receive rate in **bytes per second (B/s)**
/// - `_net_out`: Network transmit rate in **bytes per second (B/s)**
/// - `_cpu_load`: CPU usage as a percentage (**0.0 to 100.0**)
#[derive(Debug, Serialize, Clone)]
pub struct DockerRegistrationDto {
/// Unique server identifier (integer)
#[serde(rename = "Server_id")]
pub server_id: u16,
/// Number of currently running containers
// pub container_count: usize, --- IGNORE ---
/// json stringified array of DockerContainer
///
/// ## Json Example
/// json format: [{"id":"234dsf234","image":"nginx:latest","name":"webserver"},...]
///
/// ## Fields
/// id: unique container ID (first 12 hex digits)
/// image: docker image name
/// name: container name
#[serde(rename = "Containers")]
pub containers: Value, // Vec<DockerContainer>,
}
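// Illustrative sketch (an assumption, not part of this diff): building the `containers`
// payload documented above; the id, image, and name values are made up.
#[cfg(test)]
mod registration_dto_example {
use super::*;
#[test]
fn builds_containers_value() {
let containers = vec![DockerContainer {
id: "234dsf234".to_string(),
image: Some("nginx:latest".to_string()),
name: Some("webserver".to_string()),
}];
let dto = DockerRegistrationDto {
server_id: 1,
containers: serde_json::to_value(&containers).unwrap(),
};
// The payload is a JSON array, matching the documented format.
assert!(dto.containers.is_array());
}
}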
#[derive(Debug, Serialize, Clone)]
pub struct DockerMetricDto {
pub server_id: u16,
/// json stringified array of DockerContainer
///
/// ## Json Example
/// json format: [{"id":"234dsf234","status":"running","image":"nginx:latest","name":"webserver","network":{"net_in":1024,"net_out":2048},"cpu":{"cpu_load":12.5},"ram":{"ram_load":10.0}},...]
///
/// ## Fields
/// id: unique container ID (first 12 hex digits)
/// status: "running";"stopped";others
/// image: docker image name
/// name: container name
/// network: network stats
/// cpu: cpu stats
/// ram: ram stats
pub containers: Value, // Vec<DockerContainerInfo>,
}
#[derive(Debug, Serialize, Clone)]
pub struct DockerCollectMetricDto {
pub id: String,
pub status: DockerContainerStatusDto,
pub cpu: DockerContainerCpuDto,
pub ram: DockerContainerRamDto,
pub network: DockerContainerNetworkDto,
}
#[derive(Debug, Serialize, Clone)]
pub struct DockerContainerStatusDto {
pub status: Option<String>,
}
#[derive(Debug, Serialize, Clone)]
pub struct DockerContainerCpuDto {
pub cpu_load: Option<f64>,
}
#[derive(Debug, Serialize, Clone)]
pub struct DockerContainerRamDto {
pub ram_load: Option<f64>,
}
#[derive(Debug, Serialize, Clone)]
pub struct DockerContainerNetworkDto {
pub net_in: Option<f64>,
pub net_out: Option<f64>,
}
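// Illustrative sketch (an assumption, not part of this diff): one per-container entry
// serialized into the `containers` payload of a DockerMetricDto; the numbers are made up.
#[cfg(test)]
mod metric_dto_example {
use super::*;
#[test]
fn builds_metric_containers_value() {
let entry = DockerCollectMetricDto {
id: "234dsf234".to_string(),
status: DockerContainerStatusDto { status: Some("running".to_string()) },
cpu: DockerContainerCpuDto { cpu_load: Some(12.5) },
ram: DockerContainerRamDto { ram_load: Some(10.0) },
network: DockerContainerNetworkDto { net_in: Some(1024.0), net_out: Some(2048.0) },
};
let dto = DockerMetricDto {
server_id: 1,
containers: serde_json::to_value(vec![entry]).unwrap(),
};
// One container entry ends up in the JSON array.
assert_eq!(dto.containers.as_array().map(|a| a.len()), Some(1));
}
}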
#[derive(Debug, Serialize, Clone)]
pub struct DockerContainerInfo {
pub container: Option<DockerContainer>,
pub status: Option<stats::ContainerStatusInfo>, // "running";"stopped";others
pub network: Option<stats::ContainerNetworkInfo>,
pub cpu: Option<stats::ContainerCpuInfo>,
pub ram: Option<stats::ContainerMemoryInfo>,
}
#[derive(Debug, Serialize, Clone)]
pub struct DockerContainer {
pub ID: u32,
pub image: String,
pub Name: String,
pub Status: String, // "running";"stopped";others
pub _net_in: f64,
pub _net_out: f64,
pub _cpu_load: f64,
pub id: String,
#[serde(default)]
pub image: Option<String>,
#[serde(default)]
pub name: Option<String>,
}

View File

@@ -0,0 +1,44 @@
networks:
watcher-network:
driver: bridge
services:
watcher:
image: git.triggermeelmo.com/watcher/watcher-server:v0.1.11
container_name: watcher
deploy:
resources:
limits:
memory: 200M
restart: unless-stopped
env_file: .env
ports:
- "5000:5000"
volumes:
- ./watcher-volumes/data:/app/persistence
- ./watcher-volumes/dumps:/app/wwwroot/downloads/sqlite
- ./watcher-volumes/logs:/app/logs
watcher-agent:
image: git.triggermeelmo.com/donpat1to/watcher-agent:v0.1.28
container_name: watcher-agent
restart: always
privileged: true # Grants full hardware access (use with caution)
env_file: .env
pid: "host"
volumes:
# Mount critical system paths for hardware monitoring
- /sys:/sys:ro # CPU/GPU temps, sensors
- /proc:/proc # Process/CPU stats
- /dev:/dev:ro # Disk/GPU device access
- /var/run/docker.sock:/var/run/docker.sock # Docker API access
- /:/root:ro # Host root filesystem (read-only) for the df command
# Application volumes
- ./config:/app/config:ro
- ./logs:/app/logs
network_mode: host # Uses host network (for correct IP/interface detection)
healthcheck:
test: [ "CMD", "/usr/local/bin/WatcherAgent", "healthcheck" ]
interval: 30s
timeout: 3s
retries: 3

View File

@@ -1,23 +1,20 @@
watcher-agent:
image: git.triggermeelmo.com/donpat1to/watcher-agent:development
container_name: watcher-agent
restart: always
privileged: true # Grants full hardware access (use with caution)
networks:
watcher-network:
driver: bridge
services:
watcher:
image: git.triggermeelmo.com/watcher/watcher-server:v0.1.11
container_name: watcher
deploy:
resources:
limits:
memory: 200M
restart: unless-stopped
env_file: .env
pid: "host"
ports:
- "5000:5000"
volumes:
# Mount critical system paths for hardware monitoring
- /sys:/sys:ro # CPU/GPU temps, sensors
- /proc:/proc # Process/CPU stats
- /dev:/dev:ro # Disk/GPU device access
- /var/run/docker.sock:/var/run/docker.sock # Docker API access
- /:/root:ro # Host root filesystem (read-only) for the df command
# Application volumes
- ./config:/app/config:ro
- ./logs:/app/logs
network_mode: host # Uses host network (for correct IP/interface detection)
healthcheck:
test: ["CMD", "/usr/local/bin/WatcherAgent", "healthcheck"]
interval: 30s
timeout: 3s
retries: 3
- ./watcher-volumes/data:/app/persistence
- ./watcher-volumes/dumps:/app/wwwroot/downloads/sqlite
- ./watcher-volumes/logs:/app/logs