Compare commits

...

25 Commits

Author SHA1 Message Date
7d6b5165c1 changed noting
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m12s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m3s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m57s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m15s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Successful in 4s
2025-10-29 22:45:39 +01:00
2a4cc4b2d5 added status in docker metric collect
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m9s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m10s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 4m0s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m12s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 4s
2025-10-29 22:23:58 +01:00
c36b17fa05 fixed json in api call 2025-10-29 21:35:54 +01:00
375b4450f0 fixed json formatting 2025-10-29 21:07:29 +01:00
b134be4c88 updated .env 2025-10-29 14:57:42 +01:00
6afd5d0fcd json as string 2025-10-29 14:26:56 +01:00
e02914516d removed db files
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m17s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m32s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 4m30s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m39s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 6s
2025-10-29 14:25:09 +01:00
bf90d3ceb9 added docker compose file 2025-10-29 14:24:26 +01:00
a8ccb0521a updated models to parse json better
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m19s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m28s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 4m26s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m31s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-29 12:11:30 +01:00
c90a276dca added error handling
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 3s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m8s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m4s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m55s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m11s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-28 11:20:12 +01:00
dc4c23f9d9 remoeved mut input attribute in broadcast docker container
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m5s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m43s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m26s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 3s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 1m58s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Successful in 4s
2025-10-27 23:28:23 +01:00
3182d57539 added documentation for broadcasting docker container 2025-10-27 23:25:30 +01:00
8c1ef7f9f6 removed unused imports
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m1s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m38s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m28s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m2s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 4s
2025-10-27 23:07:48 +01:00
16020eea50 added error handling in metrics handle 2025-10-27 23:03:49 +01:00
432a798210 updated models 2025-10-27 21:58:35 +01:00
a095444222 fixed stuck in loop
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m8s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m26s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m47s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m16s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-09 11:14:13 +02:00
5e7bc3df54 added debugging
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m12s
Rust Cross-Platform Build / Set Tag Name (push) Has been cancelled
Rust Cross-Platform Build / Build and Push Docker Image (push) Has been cancelled
Rust Cross-Platform Build / Create Tag (push) Has been cancelled
Rust Cross-Platform Build / Workflow Summary (push) Has been cancelled
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Has been cancelled
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Has been cancelled
2025-10-09 11:10:53 +02:00
1c7a169956 added container broadcasting
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 5s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m14s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 3m18s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 4m4s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m19s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-09 10:59:21 +02:00
c7bce926e9 added docker registration dto 2025-10-09 10:39:52 +02:00
711083daa0 fixed server message handle 2025-10-06 13:01:03 +02:00
06cec6ff9f added Option to data structs 2025-10-06 12:43:15 +02:00
a7cae5e93f added docker metrics
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m3s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m56s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m37s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 5s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m12s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-05 22:43:48 +02:00
66428863e6 moved stats into own folder
Some checks failed
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Failing after 1m9s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Has been skipped
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Has been skipped
Rust Cross-Platform Build / Set Tag Name (push) Has been skipped
Rust Cross-Platform Build / Build and Push Docker Image (push) Has been skipped
Rust Cross-Platform Build / Create Tag (push) Has been skipped
Rust Cross-Platform Build / Workflow Summary (push) Successful in 2s
2025-10-05 15:21:24 +02:00
b35cac0dbe changed Dtos and Docker structs 2025-10-04 22:05:28 +02:00
bb55b46c34 fixed image version detectionr
All checks were successful
Rust Cross-Platform Build / Detect Rust Project (push) Successful in 4s
Rust Cross-Platform Build / Run Tests (push) Successful in 1m4s
Rust Cross-Platform Build / Build (x86_64-unknown-linux-gnu) (push) Successful in 2m49s
Rust Cross-Platform Build / Build (x86_64-pc-windows-gnu) (push) Successful in 3m35s
Rust Cross-Platform Build / Set Tag Name (push) Successful in 4s
Rust Cross-Platform Build / Build and Push Docker Image (push) Successful in 2m8s
Rust Cross-Platform Build / Workflow Summary (push) Successful in 1s
Rust Cross-Platform Build / Create Tag (push) Successful in 5s
2025-10-04 20:38:20 +02:00
18 changed files with 1602 additions and 372 deletions

1
.env Normal file

@@ -0,0 +1 @@
SERVER_URL=http://localhost:5000
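For context, a minimal sketch of how the agent might read this value at runtime (assuming the variable is exported or loaded from .env by a dotenv-style helper; the fallback default mirrors the committed value above):

use std::env;

fn server_url() -> String {
    // Hypothetical helper: fall back to the same default as the committed .env.
    env::var("SERVER_URL").unwrap_or_else(|_| "http://localhost:5000".to_string())
}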

3
.gitignore vendored

@@ -17,6 +17,9 @@ Cargo.lock
# MSVC Windows builds of rustc generate these, which store debugging information
*.pdb
+ .env
+ watcher-volumes
# RustRover
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore

578
WatcherAgent/- Normal file

@@ -0,0 +1,578 @@
#!/bin/sh
#
# This script should be run via curl:
# sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
# or via wget:
# sh -c "$(wget -qO- https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
# or via fetch:
# sh -c "$(fetch -o - https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
#
# As an alternative, you can first download the install script and run it afterwards:
# wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh
# sh install.sh
#
# You can tweak the install behavior by setting variables when running the script. For
# example, to change the path to the Oh My Zsh repository:
# ZSH=~/.zsh sh install.sh
#
# Respects the following environment variables:
# ZDOTDIR - path to Zsh dotfiles directory (default: unset). See [1][2]
# [1] https://zsh.sourceforge.io/Doc/Release/Parameters.html#index-ZDOTDIR
# [2] https://zsh.sourceforge.io/Doc/Release/Files.html#index-ZDOTDIR_002c-use-of
# ZSH - path to the Oh My Zsh repository folder (default: $HOME/.oh-my-zsh)
# REPO - name of the GitHub repo to install from (default: ohmyzsh/ohmyzsh)
# REMOTE - full remote URL of the git repo to install (default: GitHub via HTTPS)
# BRANCH - branch to check out immediately after install (default: master)
#
# Other options:
# CHSH - 'no' means the installer will not change the default shell (default: yes)
# RUNZSH - 'no' means the installer will not run zsh after the install (default: yes)
# KEEP_ZSHRC - 'yes' means the installer will not replace an existing .zshrc (default: no)
# OVERWRITE_CONFIRMATION - 'no' means the installer will not ask for confirmation to overwrite the existing .zshrc (default: yes)
#
# You can also pass some arguments to the install script to set some these options:
# --skip-chsh: has the same behavior as setting CHSH to 'no'
# --unattended: sets both CHSH and RUNZSH to 'no'
# --keep-zshrc: sets KEEP_ZSHRC to 'yes'
# For example:
# sh install.sh --unattended
# or:
# sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)" "" --unattended
#
set -e
# Make sure important variables exist if not already defined
#
# $USER is defined by login(1) which is not always executed (e.g. containers)
# POSIX: https://pubs.opengroup.org/onlinepubs/009695299/utilities/id.html
USER=${USER:-$(id -u -n)}
# $HOME is defined at the time of login, but it could be unset. If it is unset,
# a tilde by itself (~) will not be expanded to the current user's home directory.
# POSIX: https://pubs.opengroup.org/onlinepubs/009696899/basedefs/xbd_chap08.html#tag_08_03
HOME="${HOME:-$(getent passwd $USER 2>/dev/null | cut -d: -f6)}"
# macOS does not have getent, but this works even if $HOME is unset
HOME="${HOME:-$(eval echo ~$USER)}"
# Track if $ZSH was provided
custom_zsh=${ZSH:+yes}
# Use $zdot to keep track of where the directory is for zsh dotfiles
# To check if $ZDOTDIR was provided, explicitly check for $ZDOTDIR
zdot="${ZDOTDIR:-$HOME}"
# Default value for $ZSH
# a) if $ZDOTDIR is supplied and not $HOME: $ZDOTDIR/ohmyzsh
# b) otherwise, $HOME/.oh-my-zsh
if [ -n "$ZDOTDIR" ] && [ "$ZDOTDIR" != "$HOME" ]; then
ZSH="${ZSH:-$ZDOTDIR/ohmyzsh}"
fi
ZSH="${ZSH:-$HOME/.oh-my-zsh}"
# Default settings
REPO=${REPO:-ohmyzsh/ohmyzsh}
REMOTE=${REMOTE:-https://github.com/${REPO}.git}
BRANCH=${BRANCH:-master}
# Other options
CHSH=${CHSH:-yes}
RUNZSH=${RUNZSH:-yes}
KEEP_ZSHRC=${KEEP_ZSHRC:-no}
OVERWRITE_CONFIRMATION=${OVERWRITE_CONFIRMATION:-yes}
command_exists() {
command -v "$@" >/dev/null 2>&1
}
user_can_sudo() {
# Check if sudo is installed
command_exists sudo || return 1
# Termux can't run sudo, so we can detect it and exit the function early.
case "$PREFIX" in
*com.termux*) return 1 ;;
esac
# The following command has 3 parts:
#
# 1. Run `sudo` with `-v`. Does the following:
# • with privilege: asks for a password immediately.
# • without privilege: exits with error code 1 and prints the message:
# Sorry, user <username> may not run sudo on <hostname>
#
# 2. Pass `-n` to `sudo` to tell it to not ask for a password. If the
# password is not required, the command will finish with exit code 0.
# If one is required, sudo will exit with error code 1 and print the
# message:
# sudo: a password is required
#
# 3. Check for the words "may not run sudo" in the output to really tell
# whether the user has privileges or not. For that we have to make sure
# to run `sudo` in the default locale (with `LANG=`) so that the message
# stays consistent regardless of the user's locale.
#
! LANG= sudo -n -v 2>&1 | grep -q "may not run sudo"
}
# The [ -t 1 ] check only works when the function is not called from
# a subshell (like in `$(...)` or `(...)`, so this hack redefines the
# function at the top level to always return false when stdout is not
# a tty.
if [ -t 1 ]; then
is_tty() {
true
}
else
is_tty() {
false
}
fi
# This function uses the logic from supports-hyperlinks[1][2], which is
# made by Kat Marchán (@zkat) and licensed under the Apache License 2.0.
# [1] https://github.com/zkat/supports-hyperlinks
# [2] https://crates.io/crates/supports-hyperlinks
#
# Copyright (c) 2021 Kat Marchán
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
supports_hyperlinks() {
# $FORCE_HYPERLINK must be set and be non-zero (this acts as a logic bypass)
if [ -n "$FORCE_HYPERLINK" ]; then
[ "$FORCE_HYPERLINK" != 0 ]
return $?
fi
# If stdout is not a tty, it doesn't support hyperlinks
is_tty || return 1
# DomTerm terminal emulator (domterm.org)
if [ -n "$DOMTERM" ]; then
return 0
fi
# VTE-based terminals above v0.50 (Gnome Terminal, Guake, ROXTerm, etc)
if [ -n "$VTE_VERSION" ]; then
[ $VTE_VERSION -ge 5000 ]
return $?
fi
# If $TERM_PROGRAM is set, these terminals support hyperlinks
case "$TERM_PROGRAM" in
Hyper|iTerm.app|terminology|WezTerm|vscode) return 0 ;;
esac
# These termcap entries support hyperlinks
case "$TERM" in
xterm-kitty|alacritty|alacritty-direct) return 0 ;;
esac
# xfce4-terminal supports hyperlinks
if [ "$COLORTERM" = "xfce4-terminal" ]; then
return 0
fi
# Windows Terminal also supports hyperlinks
if [ -n "$WT_SESSION" ]; then
return 0
fi
# Konsole supports hyperlinks, but it's an opt-in setting that can't be detected
# https://github.com/ohmyzsh/ohmyzsh/issues/10964
# if [ -n "$KONSOLE_VERSION" ]; then
# return 0
# fi
return 1
}
# Adapted from code and information by Anton Kochkov (@XVilka)
# Source: https://gist.github.com/XVilka/8346728
supports_truecolor() {
case "$COLORTERM" in
truecolor|24bit) return 0 ;;
esac
case "$TERM" in
iterm |\
tmux-truecolor |\
linux-truecolor |\
xterm-truecolor |\
screen-truecolor) return 0 ;;
esac
return 1
}
fmt_link() {
# $1: text, $2: url, $3: fallback mode
if supports_hyperlinks; then
printf '\033]8;;%s\033\\%s\033]8;;\033\\\n' "$2" "$1"
return
fi
case "$3" in
--text) printf '%s\n' "$1" ;;
--url|*) fmt_underline "$2" ;;
esac
}
fmt_underline() {
is_tty && printf '\033[4m%s\033[24m\n' "$*" || printf '%s\n' "$*"
}
# shellcheck disable=SC2016 # backtick in single-quote
fmt_code() {
is_tty && printf '`\033[2m%s\033[22m`\n' "$*" || printf '`%s`\n' "$*"
}
fmt_error() {
printf '%sError: %s%s\n' "${FMT_BOLD}${FMT_RED}" "$*" "$FMT_RESET" >&2
}
setup_color() {
# Only use colors if connected to a terminal
if ! is_tty; then
FMT_RAINBOW=""
FMT_RED=""
FMT_GREEN=""
FMT_YELLOW=""
FMT_BLUE=""
FMT_BOLD=""
FMT_RESET=""
return
fi
if supports_truecolor; then
FMT_RAINBOW="
$(printf '\033[38;2;255;0;0m')
$(printf '\033[38;2;255;97;0m')
$(printf '\033[38;2;247;255;0m')
$(printf '\033[38;2;0;255;30m')
$(printf '\033[38;2;77;0;255m')
$(printf '\033[38;2;168;0;255m')
$(printf '\033[38;2;245;0;172m')
"
else
FMT_RAINBOW="
$(printf '\033[38;5;196m')
$(printf '\033[38;5;202m')
$(printf '\033[38;5;226m')
$(printf '\033[38;5;082m')
$(printf '\033[38;5;021m')
$(printf '\033[38;5;093m')
$(printf '\033[38;5;163m')
"
fi
FMT_RED=$(printf '\033[31m')
FMT_GREEN=$(printf '\033[32m')
FMT_YELLOW=$(printf '\033[33m')
FMT_BLUE=$(printf '\033[34m')
FMT_BOLD=$(printf '\033[1m')
FMT_RESET=$(printf '\033[0m')
}
setup_ohmyzsh() {
# Prevent the cloned repository from having insecure permissions. Failing to do
# so causes compinit() calls to fail with "command not found: compdef" errors
# for users with insecure umasks (e.g., "002", allowing group writability). Note
# that this will be ignored under Cygwin by default, as Windows ACLs take
# precedence over umasks except for filesystems mounted with option "noacl".
umask g-w,o-w
echo "${FMT_BLUE}Cloning Oh My Zsh...${FMT_RESET}"
command_exists git || {
fmt_error "git is not installed"
exit 1
}
ostype=$(uname)
if [ -z "${ostype%CYGWIN*}" ] && git --version | grep -Eq 'msysgit|windows'; then
fmt_error "Windows/MSYS Git is not supported on Cygwin"
fmt_error "Make sure the Cygwin git package is installed and is first on the \$PATH"
exit 1
fi
# Manual clone with git config options to support git < v1.7.2
git init --quiet "$ZSH" && cd "$ZSH" \
&& git config core.eol lf \
&& git config core.autocrlf false \
&& git config fsck.zeroPaddedFilemode ignore \
&& git config fetch.fsck.zeroPaddedFilemode ignore \
&& git config receive.fsck.zeroPaddedFilemode ignore \
&& git config oh-my-zsh.remote origin \
&& git config oh-my-zsh.branch "$BRANCH" \
&& git remote add origin "$REMOTE" \
&& git fetch --depth=1 origin \
&& git checkout -b "$BRANCH" "origin/$BRANCH" || {
[ ! -d "$ZSH" ] || {
cd -
rm -rf "$ZSH" 2>/dev/null
}
fmt_error "git clone of oh-my-zsh repo failed"
exit 1
}
# Exit installation directory
cd -
echo
}
setup_zshrc() {
# Keep most recent old .zshrc at .zshrc.pre-oh-my-zsh, and older ones
# with datestamp of installation that moved them aside, so we never actually
# destroy a user's original zshrc
echo "${FMT_BLUE}Looking for an existing zsh config...${FMT_RESET}"
# Must use this exact name so uninstall.sh can find it
OLD_ZSHRC="$zdot/.zshrc.pre-oh-my-zsh"
if [ -f "$zdot/.zshrc" ] || [ -h "$zdot/.zshrc" ]; then
# Skip this if the user doesn't want to replace an existing .zshrc
if [ "$KEEP_ZSHRC" = yes ]; then
echo "${FMT_YELLOW}Found ${zdot}/.zshrc.${FMT_RESET} ${FMT_GREEN}Keeping...${FMT_RESET}"
return
fi
if [ $OVERWRITE_CONFIRMATION != "no" ]; then
# Ask user for confirmation before backing up and overwriting
echo "${FMT_YELLOW}Found ${zdot}/.zshrc."
echo "The existing .zshrc will be backed up to .zshrc.pre-oh-my-zsh if overwritten."
echo "Make sure your .zshrc contains the following minimal configuration if you choose not to overwrite it:${FMT_RESET}"
echo "----------------------------------------"
cat "$ZSH/templates/minimal.zshrc"
echo "----------------------------------------"
printf '%sDo you want to overwrite it with the Oh My Zsh template? [Y/n]%s ' \
"$FMT_YELLOW" "$FMT_RESET"
read -r opt
case $opt in
[Yy]*|"") ;;
[Nn]*) echo "Overwrite skipped. Existing .zshrc will be kept."; return ;;
*) echo "Invalid choice. Overwrite skipped. Existing .zshrc will be kept."; return ;;
esac
fi
if [ -e "$OLD_ZSHRC" ]; then
OLD_OLD_ZSHRC="${OLD_ZSHRC}-$(date +%Y-%m-%d_%H-%M-%S)"
if [ -e "$OLD_OLD_ZSHRC" ]; then
fmt_error "$OLD_OLD_ZSHRC exists. Can't back up ${OLD_ZSHRC}"
fmt_error "re-run the installer again in a couple of seconds"
exit 1
fi
mv "$OLD_ZSHRC" "${OLD_OLD_ZSHRC}"
echo "${FMT_YELLOW}Found old .zshrc.pre-oh-my-zsh." \
"${FMT_GREEN}Backing up to ${OLD_OLD_ZSHRC}${FMT_RESET}"
fi
echo "${FMT_GREEN}Backing up to ${OLD_ZSHRC}${FMT_RESET}"
mv "$zdot/.zshrc" "$OLD_ZSHRC"
fi
echo "${FMT_GREEN}Using the Oh My Zsh template file and adding it to $zdot/.zshrc.${FMT_RESET}"
# Modify $ZSH variable in .zshrc directory to use the literal $ZDOTDIR or $HOME
omz="$ZSH"
if [ -n "$ZDOTDIR" ] && [ "$ZDOTDIR" != "$HOME" ]; then
omz=$(echo "$omz" | sed "s|^$ZDOTDIR/|\$ZDOTDIR/|")
fi
omz=$(echo "$omz" | sed "s|^$HOME/|\$HOME/|")
sed "s|^export ZSH=.*$|export ZSH=\"${omz}\"|" "$ZSH/templates/zshrc.zsh-template" > "$zdot/.zshrc-omztemp"
mv -f "$zdot/.zshrc-omztemp" "$zdot/.zshrc"
echo
}
setup_shell() {
# Skip setup if the user wants or stdin is closed (not running interactively).
if [ "$CHSH" = no ]; then
return
fi
# If this user's login shell is already "zsh", do not attempt to switch.
if [ "$(basename -- "$SHELL")" = "zsh" ]; then
return
fi
# If this platform doesn't provide a "chsh" command, bail out.
if ! command_exists chsh; then
cat <<EOF
I can't change your shell automatically because this system does not have chsh.
${FMT_BLUE}Please manually change your default shell to zsh${FMT_RESET}
EOF
return
fi
echo "${FMT_BLUE}Time to change your default shell to zsh:${FMT_RESET}"
# Prompt for user choice on changing the default login shell
printf '%sDo you want to change your default shell to zsh? [Y/n]%s ' \
"$FMT_YELLOW" "$FMT_RESET"
read -r opt
case $opt in
[Yy]*|"") ;;
[Nn]*) echo "Shell change skipped."; return ;;
*) echo "Invalid choice. Shell change skipped."; return ;;
esac
# Check if we're running on Termux
case "$PREFIX" in
*com.termux*) termux=true; zsh=zsh ;;
*) termux=false ;;
esac
if [ "$termux" != true ]; then
# Test for the right location of the "shells" file
if [ -f /etc/shells ]; then
shells_file=/etc/shells
elif [ -f /usr/share/defaults/etc/shells ]; then # Solus OS
shells_file=/usr/share/defaults/etc/shells
else
fmt_error "could not find /etc/shells file. Change your default shell manually."
return
fi
# Get the path to the right zsh binary
# 1. Use the most preceding one based on $PATH, then check that it's in the shells file
# 2. If that fails, get a zsh path from the shells file, then check it actually exists
if ! zsh=$(command -v zsh) || ! grep -qx "$zsh" "$shells_file"; then
if ! zsh=$(grep '^/.*/zsh$' "$shells_file" | tail -n 1) || [ ! -f "$zsh" ]; then
fmt_error "no zsh binary found or not present in '$shells_file'"
fmt_error "change your default shell manually."
return
fi
fi
fi
# We're going to change the default shell, so back up the current one
if [ -n "$SHELL" ]; then
echo "$SHELL" > "$zdot/.shell.pre-oh-my-zsh"
else
grep "^$USER:" /etc/passwd | awk -F: '{print $7}' > "$zdot/.shell.pre-oh-my-zsh"
fi
echo "Changing your shell to $zsh..."
# Check if user has sudo privileges to run `chsh` with or without `sudo`
#
# This allows the call to succeed without password on systems where the
# user does not have a password but does have sudo privileges, like in
# Google Cloud Shell.
#
# On systems that don't have a user with passwordless sudo, the user will
# be prompted for the password either way, so this shouldn't cause any issues.
#
if user_can_sudo; then
sudo -k chsh -s "$zsh" "$USER" # -k forces the password prompt
else
chsh -s "$zsh" "$USER" # run chsh normally
fi
# Check if the shell change was successful
if [ $? -ne 0 ]; then
fmt_error "chsh command unsuccessful. Change your default shell manually."
else
export SHELL="$zsh"
echo "${FMT_GREEN}Shell successfully changed to '$zsh'.${FMT_RESET}"
fi
echo
}
# shellcheck disable=SC2183 # printf string has more %s than arguments ($FMT_RAINBOW expands to multiple arguments)
print_success() {
printf '%s %s__ %s %s %s %s %s__ %s\n' $FMT_RAINBOW $FMT_RESET
printf '%s ____ %s/ /_ %s ____ ___ %s__ __ %s ____ %s_____%s/ /_ %s\n' $FMT_RAINBOW $FMT_RESET
printf '%s / __ \\%s/ __ \\ %s / __ `__ \\%s/ / / / %s /_ / %s/ ___/%s __ \\ %s\n' $FMT_RAINBOW $FMT_RESET
printf '%s/ /_/ /%s / / / %s / / / / / /%s /_/ / %s / /_%s(__ )%s / / / %s\n' $FMT_RAINBOW $FMT_RESET
printf '%s\\____/%s_/ /_/ %s /_/ /_/ /_/%s\\__, / %s /___/%s____/%s_/ /_/ %s\n' $FMT_RAINBOW $FMT_RESET
printf '%s %s %s %s /____/ %s %s %s %s....is now installed!%s\n' $FMT_RAINBOW $FMT_GREEN $FMT_RESET
printf '\n'
printf '\n'
printf "%s %s %s\n" "Before you scream ${FMT_BOLD}${FMT_YELLOW}Oh My Zsh!${FMT_RESET} look over the" \
"$(fmt_code "$(fmt_link ".zshrc" "file://$zdot/.zshrc" --text)")" \
"file to select plugins, themes, and options."
printf '\n'
printf '%s\n' "• Follow us on X: $(fmt_link @ohmyzsh https://x.com/ohmyzsh)"
printf '%s\n' "• Join our Discord community: $(fmt_link "Discord server" https://discord.gg/ohmyzsh)"
printf '%s\n' "• Get stickers, t-shirts, coffee mugs and more: $(fmt_link "Planet Argon Shop" https://shop.planetargon.com/collections/oh-my-zsh)"
printf '%s\n' $FMT_RESET
}
main() {
# Run as unattended if stdin is not a tty
if [ ! -t 0 ]; then
RUNZSH=no
CHSH=no
OVERWRITE_CONFIRMATION=no
fi
# Parse arguments
while [ $# -gt 0 ]; do
case $1 in
--unattended) RUNZSH=no; CHSH=no; OVERWRITE_CONFIRMATION=no ;;
--skip-chsh) CHSH=no ;;
--keep-zshrc) KEEP_ZSHRC=yes ;;
esac
shift
done
setup_color
if ! command_exists zsh; then
echo "${FMT_YELLOW}Zsh is not installed.${FMT_RESET} Please install zsh first."
exit 1
fi
if [ -d "$ZSH" ]; then
echo "${FMT_YELLOW}The \$ZSH folder already exists ($ZSH).${FMT_RESET}"
if [ "$custom_zsh" = yes ]; then
cat <<EOF
You ran the installer with the \$ZSH setting or the \$ZSH variable is
exported. You have 3 options:
1. Unset the ZSH variable when calling the installer:
$(fmt_code "ZSH= sh install.sh")
2. Install Oh My Zsh to a directory that doesn't exist yet:
$(fmt_code "ZSH=path/to/new/ohmyzsh/folder sh install.sh")
3. (Caution) If the folder doesn't contain important information,
you can just remove it with $(fmt_code "rm -r $ZSH")
EOF
else
echo "You'll need to remove it if you want to reinstall."
fi
exit 1
fi
# Create ZDOTDIR folder structure if it doesn't exist
if [ -n "$ZDOTDIR" ]; then
mkdir -p "$ZDOTDIR"
fi
setup_ohmyzsh
setup_zshrc
setup_shell
print_success
if [ $RUNZSH = no ]; then
echo "${FMT_YELLOW}Run zsh to try it out.${FMT_RESET}"
exit
fi
exec zsh -l
}
main "$@"

View File

@@ -15,7 +15,8 @@ use std::time::Duration;
use crate::docker::serverclientcomm::handle_server_message;
use crate::hardware::HardwareInfo;
use crate::models::{
-     Acknowledgment, HeartbeatDto, IdResponse, MetricDto, RegistrationDto, ServerMessage,
+     Acknowledgment, DockerMetricDto, DockerRegistrationDto, HeartbeatDto,
+     IdResponse, MetricDto, RegistrationDto, ServerMessage,
};
use anyhow::Result;
@@ -39,7 +40,7 @@ use bollard::Docker;
/// Returns an error if unable to register after repeated attempts.
pub async fn register_with_server(
    base_url: &str,
- ) -> Result<(i32, String), Box<dyn Error + Send + Sync>> {
+ ) -> Result<(u16, String), Box<dyn Error + Send + Sync>> {
    // First get local IP
    let ip = local_ip_address::local_ip()?.to_string();
    println!("Local IP address detected: {}", ip);
@@ -103,7 +104,7 @@ pub async fn register_with_server(
async fn get_server_id_by_ip(
    base_url: &str,
    ip: &str,
- ) -> Result<(i32, String), Box<dyn Error + Send + Sync>> {
+ ) -> Result<(u16, String), Box<dyn Error + Send + Sync>> {
    let client = Client::builder()
        .danger_accept_invalid_certs(true)
        .build()?;
@@ -151,6 +152,89 @@ async fn get_server_id_by_ip(
    }
}
/// Broadcasts Docker container information to the monitoring server for service discovery.
///
/// This function sends the current Docker container configuration to the server
/// to register available containers and enable service monitoring. It will
/// continuously retry until successful, making it suitable for initial
/// registration scenarios.
///
/// # Arguments
///
/// * `base_url` - The base URL of the monitoring server API (e.g., "https://monitoring.example.com")
/// * `server_id` - The ID of the server to associate the containers with
/// * `container_dto` - Mutable reference to Docker container information for broadcast
///
/// # Returns
///
/// * `Ok(())` - When container information is successfully broadcasted to the server
/// * `Err(Box<dyn Error + Send + Sync>)` - If an unrecoverable error occurs (though the function typically retries on transient failures)
///
/// # Behavior
///
/// This function operates in a retry loop with the following characteristics:
///
/// - **Retry Logic**: Attempts broadcast every 10 seconds until successful
/// - **Mutation**: Modifies the `container_dto` to set the `server_id` before sending
/// - **TLS**: Accepts invalid TLS certificates for development environments
/// - **Logging**: Provides detailed console output about broadcast attempts and results
///
/// # Errors
///
/// This function may return an error in the following cases:
///
/// * **HTTP Client Creation**: Failed to create HTTP client with TLS configuration
/// * **Network Issues**: Persistent connection failures to the backend server
/// * **Server Errors**: Backend returns non-success HTTP status codes repeatedly
/// * **JSON Serialization**: Cannot serialize container data (should be rare with proper DTOs)
pub async fn broadcast_docker_containers(
base_url: &str,
server_id: u16,
container_dto: &DockerRegistrationDto,
) -> Result<(), Box<dyn Error + Send + Sync>> {
// First get local IP
println!("Preparing to broadcast docker containers...");
// Create HTTP client for registration
let client = Client::builder()
.danger_accept_invalid_certs(true)
.build()?;
// Prepare registration data
let mut broadcast_data = container_dto.clone();
broadcast_data.server_id = server_id;
// Try to register (will retry on failure)
loop {
println!("Attempting to broadcast containers...");
let json_body = serde_json::to_string_pretty(&broadcast_data)?;
println!("📤 JSON being posted:\n{}", json_body);
let url = format!("{}/monitoring/service-discovery", base_url);
match client.post(&url).json(&container_dto).send().await {
Ok(resp) if resp.status().is_success() => {
println!(
"✅ Successfully broadcasted following docker container: {:?}",
container_dto
);
return Ok(());
}
Ok(resp) => {
let status = resp.status();
let text = resp.text().await.unwrap_or_default();
println!(
"⚠️ Broadcasting failed ({}): {} (will retry in 10 seconds)",
status, text
);
}
Err(err) => {
println!("⚠️ Broadcasting failed: {} (will retry in 10 seconds)", err);
}
}
sleep(Duration::from_secs(10)).await;
}
}
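Taken together with register_with_server above, the broadcast might be invoked roughly like this at agent startup (a hedged sketch; the docker_manager handle, the create_registration_dto call added later in this diff, and the exact startup order are assumptions, not shown here):

// Hypothetical startup wiring: register first, then announce containers.
let base_url = std::env::var("SERVER_URL")?;
let (server_id, _) = register_with_server(&base_url).await?;
let registration = docker_manager.create_registration_dto().await?;
broadcast_docker_containers(&base_url, server_id, &registration).await?;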
/// Periodically sends heartbeat signals to the backend server to indicate agent liveness.
///
/// This function runs in a background task and will retry on network errors.
@@ -308,7 +392,7 @@ pub async fn listening_to_server(
///
/// # Returns
/// * `Result<(), Box<dyn Error + Send + Sync>>` - Ok if acknowledgment is sent successfully.
- async fn send_acknowledgment(
+ pub async fn send_acknowledgment(
    client: &reqwest::Client,
    base_url: &str,
    message_id: &str,
@@ -339,3 +423,24 @@ async fn send_acknowledgment(
    Ok(())
}
pub async fn send_docker_metrics(
base_url: &str,
docker_metrics: &DockerMetricDto,
) -> Result<(), Box<dyn Error + Send + Sync>> {
let client = Client::new();
let url = format!("{}/monitoring/docker-metric", base_url);
println!("Docker Metrics: {}", serde_json::to_string_pretty(&docker_metrics)?);
match client.post(&url).json(&docker_metrics).send().await {
Ok(res) => println!(
"✅ Sent docker metrics for server {} | Status: {}",
docker_metrics.server_id,
res.status()
),
Err(err) => eprintln!("❌ Failed to send docker metrics: {}", err),
}
Ok(())
}
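A plausible way to drive this sender periodically, sketched under the assumption that DockerManager::collect_metrics (added later in this diff) supplies the payload and that a 30-second interval is acceptable:

// Hypothetical metrics loop: collect, stamp the server id, send, sleep, repeat.
loop {
    let mut metrics = docker_manager.collect_metrics().await?;
    metrics.server_id = server_id;
    if let Err(err) = send_docker_metrics(&base_url, &metrics).await {
        eprintln!("Failed to send docker metrics: {}", err);
    }
    tokio::time::sleep(std::time::Duration::from_secs(30)).await;
}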

View File

@@ -4,7 +4,7 @@
//!
use crate::docker::stats;
use crate::docker::stats::{ContainerCpuInfo, ContainerNetworkInfo};
- use crate::models::{DockerContainerDto, DockerContainerRegistrationDto};
+ use crate::models::DockerContainer;
use bollard::query_parameters::{
    CreateImageOptions, ListContainersOptions, RestartContainerOptions,
@@ -20,7 +20,7 @@ use std::error::Error;
///
/// # Returns
/// * `Vec<DockerContainer>` - Vector of Docker container info.
- pub async fn get_available_containers(docker: &Docker) -> Vec<DockerContainerDto> {
+ pub async fn get_available_containers(docker: &Docker) -> Vec<DockerContainer> {
    println!("=== DOCKER CONTAINER LIST ===");
    let options = Some(ListContainersOptions {
@@ -51,29 +51,10 @@ pub async fn get_available_containers(docker: &Docker) -> Vec<DockerContainerDto
            .map(|img| img.to_string())
            .unwrap_or_else(|| "unknown".to_string());
-         /*let status = container
-             .status
-             .as_ref()
-             .map(|s| match s.to_lowercase().as_str() {
-                 s if s.contains("up") || s.contains("running") => "running".to_string(),
-                 s if s.contains("exited") || s.contains("stopped") => {
-                     "stopped".to_string()
-                 }
-                 _ => s.to_string(),
-             })
-             .unwrap_or_else(|| "unknown".to_string());
-         println!(
-             " - ID: {}, Image: {}, Name: {}",
-             short_id,
-             container.image.unwrap(),
-             name
-         );*/
-         Some(DockerContainerDto {
+         Some(DockerContainer {
            id: short_id.to_string(),
-             image,
-             name: name,
+             image: Some(image),
+             name: Some(name),
        })
    })
    .collect()
@@ -191,20 +172,21 @@ pub async fn get_network_stats(
    docker: &Docker,
    container_id: &str,
) -> Result<ContainerNetworkInfo, Box<dyn Error + Send + Sync>> {
-     let (_, net_info) = stats::get_single_container_stats(docker, container_id).await?;
+     let (_, net_info, _, _) = stats::get_single_container_stats(docker, container_id).await?;
    if let Some(net_info) = net_info {
        Ok(net_info)
    } else {
        // Return default network info if not found
+         println!("No network info found for container {}", container_id);
        Ok(ContainerNetworkInfo {
-             container_id: container_id.to_string(),
-             rx_bytes: 0,
-             tx_bytes: 0,
-             rx_packets: 0,
-             tx_packets: 0,
-             rx_errors: 0,
-             tx_errors: 0,
+             container_id: Some(container_id.to_string()),
+             rx_bytes: None,
+             tx_bytes: None,
+             rx_packets: None,
+             tx_packets: None,
+             rx_errors: None,
+             tx_errors: None,
        })
    }
}
@@ -214,18 +196,19 @@ pub async fn get_cpu_stats(
    docker: &Docker,
    container_id: &str,
) -> Result<ContainerCpuInfo, Box<dyn Error + Send + Sync>> {
-     let (cpu_info, _) = stats::get_single_container_stats(docker, container_id).await?;
+     let (cpu_info, _, _, _) = stats::get_single_container_stats(docker, container_id).await?;
    if let Some(cpu_info) = cpu_info {
        Ok(cpu_info)
    } else {
        // Return default CPU info if not found
+         println!("No CPU info found for container {}", container_id);
        Ok(ContainerCpuInfo {
-             container_id: container_id.to_string(),
-             cpu_usage_percent: 0.0,
-             system_cpu_usage: 0,
-             container_cpu_usage: 0,
-             online_cpus: 1,
+             container_id: Some(container_id.to_string()),
+             cpu_usage_percent: None,
+             system_cpu_usage: None,
+             container_cpu_usage: None,
+             online_cpus: None,
        })
    }
}
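Because the stats structs are now Option-based, callers fall back to defaults instead of failing; a brief hedged example of consuming get_cpu_stats under that convention:

// Hypothetical caller: treat absent stats as zero rather than propagating an error.
let cpu = get_cpu_stats(&docker, container_id).await?;
let load = cpu.cpu_usage_percent.unwrap_or(0.0);
println!(
    "{}: {:.2}% CPU",
    cpu.container_id.as_deref().unwrap_or(container_id),
    load
);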

View File

@@ -11,8 +11,12 @@ pub mod container;
pub mod serverclientcomm;
pub mod stats;
- use crate::models::{DockerContainerDto, DockerContainerMetricDto};
- use bollard::{query_parameters::InspectContainerOptions, Docker};
+ use crate::models::{
+     DockerCollectMetricDto, DockerContainer, DockerContainerCpuDto, DockerContainerInfo,
+     DockerContainerNetworkDto, DockerContainerRamDto, DockerMetricDto, DockerRegistrationDto,
+     DockerContainerStatusDto
+ };
+ use bollard::Docker;
use std::error::Error;
/// Main Docker manager that holds the Docker client and provides all operations
@@ -49,14 +53,14 @@ impl DockerManager {
    /// Finds the Docker container running the agent by image name
    pub async fn get_client_container(
        &self,
-     ) -> Result<Option<DockerContainerDto>, Box<dyn Error + Send + Sync>> {
+     ) -> Result<Option<DockerContainer>, Box<dyn Error + Send + Sync>> {
        let containers = container::get_available_containers(&self.docker).await;
        let client_image = "watcher-agent";
        Ok(containers
            .into_iter()
-             .find(|c| c.image.contains(client_image))
-             .map(|container| DockerContainerDto {
+             .find(|c| c.clone().image.unwrap().contains(client_image))
+             .map(|container| DockerContainer {
                id: container.id,
                image: container.image,
                name: container.name,
@@ -66,7 +70,14 @@ impl DockerManager {
    /// Gets the current client version (image name) if running in Docker
    pub async fn get_client_version(&self) -> String {
        match self.get_client_container().await {
-             Ok(Some(container)) => container.image,
+             Ok(Some(container)) => container
+                 .image
+                 .clone()
+                 .unwrap()
+                 .split(':')
+                 .next()
+                 .unwrap_or("unknown")
+                 .to_string(),
            Ok(None) => {
                println!("Warning: No WatcherAgent container found");
                "unknown".to_string()
@@ -87,14 +98,14 @@ impl DockerManager {
    }
    /// Gets all available containers as DTOs for registration
-     pub async fn get_containers_for_registration(
+     pub async fn get_containers(
        &self,
-     ) -> Result<Vec<DockerContainerDto>, Box<dyn Error + Send + Sync>> {
+     ) -> Result<Vec<DockerContainer>, Box<dyn Error + Send + Sync>> {
        let containers = container::get_available_containers(&self.docker).await;
        Ok(containers
            .into_iter()
-             .map(|container| DockerContainerDto {
+             .map(|container| DockerContainer {
                id: container.id,
                image: container.image,
                name: container.name,
@@ -102,61 +113,6 @@ impl DockerManager {
            .collect())
    }
/// Gets container metrics for all containers
pub async fn get_container_metrics(
&self,
) -> Result<Vec<DockerContainerMetricDto>, Box<dyn Error + Send + Sync>> {
let containers = container::get_available_containers(&self.docker).await;
let mut metrics = Vec::new();
for container in containers {
// Get network stats (you'll need to implement this in container.rs)
let network_stats = container::get_network_stats(&self.docker, &container.id).await?;
// Get CPU stats (you'll need to implement this in container.rs)
let cpu_stats = container::get_cpu_stats(&self.docker, &container.id).await?;
// Get current status by inspecting the container
let status = match self
.docker
.inspect_container(&container.id, None::<InspectContainerOptions>)
.await
{
Ok(container_info) => {
// Extract status from container state and convert to string
container_info
.state
.and_then(|state| state.status)
.map(|status_enum| {
match status_enum {
bollard::models::ContainerStateStatusEnum::CREATED => "created",
bollard::models::ContainerStateStatusEnum::RUNNING => "running",
bollard::models::ContainerStateStatusEnum::PAUSED => "paused",
bollard::models::ContainerStateStatusEnum::RESTARTING => {
"restarting"
}
bollard::models::ContainerStateStatusEnum::REMOVING => "removing",
bollard::models::ContainerStateStatusEnum::EXITED => "exited",
bollard::models::ContainerStateStatusEnum::DEAD => "dead",
bollard::secret::ContainerStateStatusEnum::EMPTY => todo!(),
}
.to_string()
})
.unwrap_or_else(|| "unknown".to_string())
}
Err(_) => "unknown".to_string(),
};
metrics.push(DockerContainerMetricDto {
id: container.id,
status: status,
network: network_stats,
cpu: cpu_stats,
});
}
Ok(metrics)
}
    /// Gets the number of running containers
    pub async fn get_container_count(&self) -> Result<usize, Box<dyn Error + Send + Sync>> {
        let containers = container::get_available_containers(&self.docker).await;
@@ -171,21 +127,200 @@ impl DockerManager {
        container::restart_container(&self.docker, container_id).await
    }
-     /// Gets total network statistics across all containers
-     pub async fn get_total_network_stats(
+     /// Collects Docker metrics for all containers
+     pub async fn collect_metrics(&self) -> Result<DockerMetricDto, Box<dyn Error + Send + Sync>> {
let containers = self.get_containers().await?;
// Get stats with status information
let stats_result = stats::get_container_stats(&self.docker).await;
let (cpu_stats, net_stats, mem_stats, status_stats) = match stats_result {
Ok(stats) => stats,
Err(e) => {
eprintln!("Warning: Failed to get container stats: {}", e);
// Return empty stats instead of failing completely
(Vec::new(), Vec::new(), Vec::new(), Vec::new())
}
};
println!(
"Debug: Found {} containers, {} CPU stats, {} network stats, {} memory stats, {} status stats",
containers.len(),
cpu_stats.len(),
net_stats.len(),
mem_stats.len(),
status_stats.len(),
);
let container_infos_total: Vec<_> = containers
.into_iter()
.map(|container| {
// Use short ID for matching (first 12 chars)
let container_short_id = if container.id.len() > 12 {
&container.id[..12]
} else {
&container.id
};
let cpu = cpu_stats
.iter()
.find(|c| {
c.container_id
.as_ref()
.map(|id| id.starts_with(container_short_id))
.unwrap_or(false)
})
.cloned();
let network = net_stats
.iter()
.find(|n| {
n.container_id
.as_ref()
.map(|id| id.starts_with(container_short_id))
.unwrap_or(false)
})
.cloned();
let ram = mem_stats
.iter()
.find(|m| {
m.container_id
.as_ref()
.map(|id| id.starts_with(container_short_id))
.unwrap_or(false)
})
.cloned();
let status = status_stats
.iter()
.find(|s| {
s.container_id
.as_ref()
.map(|id| id.starts_with(container_short_id))
.unwrap_or(false)
})
.cloned(); // Clone the entire ContainerStatusInfo
// Debug output for this container
if cpu.is_none() || network.is_none() || ram.is_none() {
println!(
"Debug: Container {} - CPU: {:?}, Network: {:?}, RAM: {:?}, Status {:?}",
container_short_id,
cpu.is_some(),
network.is_some(),
ram.is_some(),
status.is_some()
);
}
// Debug output for this container
if cpu.is_none() || network.is_none() || ram.is_none() || status.is_none() {
println!(
"Debug: Container {} - CPU: {:?}, Network: {:?}, RAM: {:?}, Status: {:?}",
container_short_id,
cpu.is_some(),
network.is_some(),
ram.is_some(),
status.is_some()
);
}
DockerContainerInfo {
container: Some(container),
status,
cpu,
network,
ram,
}
})
.collect();
let container_infos: Vec<DockerCollectMetricDto> = container_infos_total
.into_iter()
.filter_map(|info| {
let _container = match info.container {
Some(c) => c,
None => {
eprintln!("Warning: Container info missing container data, skipping");
return None;
}
};
// Safely handle CPU data with defaults
let cpu_dto = if let Some(cpu) = info.cpu {
DockerContainerCpuDto {
cpu_load: cpu.cpu_usage_percent,
}
} else {
DockerContainerCpuDto { cpu_load: None }
};
// Safely handle RAM data with defaults
let ram_dto = if let Some(ram) = info.ram {
DockerContainerRamDto {
ram_load: ram.memory_usage_percent,
}
} else {
DockerContainerRamDto { ram_load: None }
};
// Safely handle network data with defaults
let network_dto = if let Some(net) = info.network {
DockerContainerNetworkDto {
net_in: net.rx_bytes.map(|bytes| bytes as f64),
net_out: net.tx_bytes.map(|bytes| bytes as f64),
}
} else {
DockerContainerNetworkDto {
net_in: None,
net_out: None,
}
};
let status_dto = if let Some(status_info) = info.status {
DockerContainerStatusDto {
status: status_info.status, // Extract the status string
}
} else {
DockerContainerStatusDto { status: None }
};
Some(DockerCollectMetricDto {
server_id: 0,
status: status_dto,
cpu: cpu_dto,
ram: ram_dto,
network: network_dto,
})
})
.collect();
let dto = DockerMetricDto {
server_id: 0, // This should be set by the caller
containers: serde_json::to_value(&container_infos)?,
};
Ok(dto)
}
pub async fn create_registration_dto(
        &self,
-     ) -> Result<(u64, u64), Box<dyn Error + Send + Sync>> {
+     ) -> Result<DockerRegistrationDto, Box<dyn Error + Send + Sync>> {
-         let metrics = self.get_container_metrics().await?;
-         let net_in_total: u64 = metrics.iter().map(|m| m.network.rx_bytes).sum();
-         let net_out_total: u64 = metrics.iter().map(|m| m.network.tx_bytes).sum();
-         Ok((net_in_total, net_out_total))
+         let containers = self.get_containers().await?;
+         let container_string = serde_json::to_value(&containers)?;
+         let dto = DockerRegistrationDto {
+             server_id: 0, // This will be set by the caller
+             containers: container_string,
+         };
+         Ok(dto)
    }
}
// Keep these as utility functions if needed, but they should use DockerManager internally
- impl DockerContainerDto {
+ impl DockerContainer {
    /// Returns the container ID
    pub fn id(&self) -> &str {
        &self.id
@@ -193,11 +328,11 @@ impl DockerContainerDto {
    /// Returns the image name
    pub fn image(&self) -> &str {
-         &self.image
+         &self.image.as_deref().unwrap_or("unknown")
    }
    /// Returns the container name
    pub fn name(&self) -> &str {
-         &self.name
+         &self.name.as_deref().unwrap_or("unknown")
    }
}
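For reference, a hedged sketch of inspecting the registration payload produced by create_registration_dto; the exact JSON casing depends on serde attributes on the DTO structs, which are not shown in this diff:

// Hypothetical debug print of the registration DTO before broadcasting it.
let dto = docker_manager.create_registration_dto().await?;
println!("{}", serde_json::to_string_pretty(&dto)?);
// Rough shape (assuming default field names):
// { "server_id": 0, "containers": [ { "id": "...", "image": "...", "name": "..." } ] }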

View File

@@ -5,7 +5,7 @@
use crate::models::ServerMessage;
use super::container::{restart_container, update_docker_image};
- use bollard::query_parameters::{CreateImageOptions, RestartContainerOptions};
+ //use bollard::query_parameters::{CreateImageOptions, RestartContainerOptions};
use bollard::Docker;
use std::error::Error;
@@ -40,7 +40,7 @@ pub async fn handle_server_message(
    if let Some(image_name) = msg.data.get("image").and_then(|v| v.as_str()) {
        println!("Received restart command for image: {}", image_name);
        // Call your update_docker_image function here
-         update_docker_image(docker, image_name).await?;
+         restart_container(docker, image_name).await?;
        Ok(())
    } else {
        Err("Missing image name in update message".into())

View File

@@ -1,206 +0,0 @@
use bollard::query_parameters::{ListContainersOptions, StatsOptions};
use bollard::Docker;
use futures_util::stream::TryStreamExt;
use serde::{Deserialize, Serialize};
use std::error::Error;
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ContainerCpuInfo {
pub container_id: String,
pub cpu_usage_percent: f64,
pub system_cpu_usage: u64,
pub container_cpu_usage: u64,
pub online_cpus: u32,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ContainerNetworkInfo {
pub container_id: String,
pub rx_bytes: u64,
pub tx_bytes: u64,
pub rx_packets: u64,
pub tx_packets: u64,
pub rx_errors: u64,
pub tx_errors: u64,
}
/// Get container statistics for all containers using an existing Docker client
pub async fn get_container_stats(
docker: &Docker,
) -> Result<(Vec<ContainerCpuInfo>, Vec<ContainerNetworkInfo>), Box<dyn Error + Send + Sync>> {
let containers = docker
.list_containers(Some(ListContainersOptions {
all: true,
..Default::default()
}))
.await?;
let mut cpu_infos = Vec::new();
let mut net_infos = Vec::new();
for container in containers {
let id = container.id.unwrap_or_default();
// Skip if no ID
if id.is_empty() {
continue;
}
let mut stats_stream = docker.stats(
&id,
Some(StatsOptions {
stream: false,
one_shot: true,
}),
);
if let Some(stats) = stats_stream.try_next().await? {
// CPU Info
if let (Some(cpu_stats), Some(precpu_stats)) = (&stats.cpu_stats, &stats.precpu_stats) {
if let (Some(cpu_usage), Some(pre_cpu_usage)) =
(&cpu_stats.cpu_usage, &precpu_stats.cpu_usage)
{
let cpu_delta = cpu_usage
.total_usage
.unwrap_or(0)
.saturating_sub(pre_cpu_usage.total_usage.unwrap_or(0));
let system_delta = cpu_stats
.system_cpu_usage
.unwrap_or(0)
.saturating_sub(precpu_stats.system_cpu_usage.unwrap_or(0));
let online_cpus = cpu_stats.online_cpus.unwrap_or(1);
let cpu_percent = if system_delta > 0 && online_cpus > 0 {
(cpu_delta as f64 / system_delta as f64) * online_cpus as f64 * 100.0
} else {
0.0
};
cpu_infos.push(ContainerCpuInfo {
container_id: id.clone(),
cpu_usage_percent: cpu_percent,
system_cpu_usage: cpu_stats.system_cpu_usage.unwrap_or(0),
container_cpu_usage: cpu_usage.total_usage.unwrap_or(0),
online_cpus,
});
}
}
// Network Info
if let Some(networks) = stats.networks {
for (_name, net) in networks {
net_infos.push(ContainerNetworkInfo {
container_id: id.clone(),
rx_bytes: net.rx_bytes.unwrap(),
tx_bytes: net.tx_bytes.unwrap(),
rx_packets: net.rx_packets.unwrap(),
tx_packets: net.tx_packets.unwrap(),
rx_errors: net.rx_errors.unwrap(),
tx_errors: net.tx_errors.unwrap(),
});
}
}
}
}
Ok((cpu_infos, net_infos))
}
/// Get container statistics for a specific container
pub async fn get_single_container_stats(
docker: &Docker,
container_id: &str,
) -> Result<(Option<ContainerCpuInfo>, Option<ContainerNetworkInfo>), Box<dyn Error + Send + Sync>>
{
let mut stats_stream = docker.stats(
container_id,
Some(StatsOptions {
stream: false,
one_shot: true,
}),
);
if let Some(stats) = stats_stream.try_next().await? {
let mut cpu_info = None;
let mut net_info = None;
// CPU Info
if let (Some(cpu_stats), Some(precpu_stats)) = (&stats.cpu_stats, &stats.precpu_stats) {
if let (Some(cpu_usage), Some(pre_cpu_usage)) =
(&cpu_stats.cpu_usage, &precpu_stats.cpu_usage)
{
let cpu_delta = cpu_usage
.total_usage
.unwrap_or(0)
.saturating_sub(pre_cpu_usage.total_usage.unwrap_or(0));
let system_delta = cpu_stats
.system_cpu_usage
.unwrap_or(0)
.saturating_sub(precpu_stats.system_cpu_usage.unwrap_or(0));
let online_cpus = cpu_stats.online_cpus.unwrap_or(1);
let cpu_percent = if system_delta > 0 && online_cpus > 0 {
(cpu_delta as f64 / system_delta as f64) * online_cpus as f64 * 100.0
} else {
0.0
};
cpu_info = Some(ContainerCpuInfo {
container_id: container_id.to_string(),
cpu_usage_percent: cpu_percent,
system_cpu_usage: cpu_stats.system_cpu_usage.unwrap_or(0),
container_cpu_usage: cpu_usage.total_usage.unwrap_or(0),
online_cpus,
});
}
}
// Network Info
if let Some(networks) = stats.networks {
// Take the first network interface (usually eth0)
if let Some((_name, net)) = networks.into_iter().next() {
net_info = Some(ContainerNetworkInfo {
container_id: container_id.to_string(),
rx_bytes: net.rx_bytes.unwrap(),
tx_bytes: net.tx_bytes.unwrap(),
rx_packets: net.rx_packets.unwrap(),
tx_packets: net.tx_packets.unwrap(),
rx_errors: net.rx_errors.unwrap(),
tx_errors: net.tx_errors.unwrap(),
});
}
}
Ok((cpu_info, net_info))
} else {
Ok((None, None))
}
}
/// Get total network statistics across all containers
pub async fn get_total_network_stats(
docker: &Docker,
) -> Result<(u64, u64), Box<dyn Error + Send + Sync>> {
let (_, net_infos) = get_container_stats(docker).await?;
let total_rx: u64 = net_infos.iter().map(|net| net.rx_bytes).sum();
let total_tx: u64 = net_infos.iter().map(|net| net.tx_bytes).sum();
Ok((total_rx, total_tx))
}
/// Get average CPU usage across all containers
pub async fn get_average_cpu_usage(docker: &Docker) -> Result<f64, Box<dyn Error + Send + Sync>> {
let (cpu_infos, _) = get_container_stats(docker).await?;
if cpu_infos.is_empty() {
return Ok(0.0);
}
let total_cpu: f64 = cpu_infos.iter().map(|cpu| cpu.cpu_usage_percent).sum();
Ok(total_cpu / cpu_infos.len() as f64)
}

View File

@@ -0,0 +1,99 @@
use super::ContainerCpuInfo;
use bollard::query_parameters::{ListContainersOptions, StatsOptions};
use bollard::Docker;
use futures_util::stream::TryStreamExt;
use std::error::Error;
/// Get CPU statistics for all containers
pub async fn get_all_containers_cpu_stats(
docker: &Docker,
) -> Result<Vec<ContainerCpuInfo>, Box<dyn Error + Send + Sync>> {
let containers = docker
.list_containers(Some(ListContainersOptions {
all: true,
..Default::default()
}))
.await?;
let mut cpu_infos = Vec::new();
for container in containers {
let id = container.id.unwrap_or_default();
// Skip if no ID
if id.is_empty() {
continue;
}
if let Some(cpu_info) = get_single_container_cpu_stats(docker, &id).await? {
cpu_infos.push(cpu_info);
}
}
Ok(cpu_infos)
}
/// Get CPU statistics for a specific container
pub async fn get_single_container_cpu_stats(
docker: &Docker,
container_id: &str,
) -> Result<Option<ContainerCpuInfo>, Box<dyn Error + Send + Sync>> {
let mut stats_stream = docker.stats(
container_id,
Some(StatsOptions {
stream: false,
one_shot: true,
}),
);
if let Some(stats) = stats_stream.try_next().await? {
if let (Some(cpu_stats), Some(precpu_stats)) = (&stats.cpu_stats, &stats.precpu_stats) {
if let (Some(cpu_usage), Some(pre_cpu_usage)) =
(&cpu_stats.cpu_usage, &precpu_stats.cpu_usage)
{
let cpu_delta = cpu_usage
.total_usage
.unwrap_or(0)
.saturating_sub(pre_cpu_usage.total_usage.unwrap_or(0));
let system_delta = cpu_stats
.system_cpu_usage
.unwrap_or(0)
.saturating_sub(precpu_stats.system_cpu_usage.unwrap_or(0));
let online_cpus = cpu_stats.online_cpus.unwrap_or(1);
let cpu_percent = if system_delta > 0 && online_cpus > 0 {
(cpu_delta as f64 / system_delta as f64) * online_cpus as f64 * 100.0
} else {
0.0
};
return Ok(Some(ContainerCpuInfo {
container_id: Some(container_id.to_string()),
cpu_usage_percent: Some(cpu_percent),
system_cpu_usage: Some(cpu_stats.system_cpu_usage.unwrap_or(0)),
container_cpu_usage: Some(cpu_usage.total_usage.unwrap_or(0)),
online_cpus: Some(online_cpus),
}));
}
}
}
Ok(None)
}
/// Get average CPU usage across all containers
pub async fn get_average_cpu_usage(docker: &Docker) -> Result<f64, Box<dyn Error + Send + Sync>> {
let cpu_infos = get_all_containers_cpu_stats(docker).await?;
if cpu_infos.is_empty() {
return Ok(0.0);
}
let total_cpu: f64 = cpu_infos
.iter()
.map(|cpu| cpu.cpu_usage_percent.unwrap_or(0.0))
.sum();
Ok(total_cpu / cpu_infos.len() as f64)
}
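For orientation, a minimal usage sketch of this module (hypothetical, not part of the commit): it assumes the module is reachable as crate::docker::stats::cpu and that the client comes from bollard's Docker::connect_with_local_defaults().

use bollard::Docker;
use crate::docker::stats::cpu;

// Hypothetical helper: connect to the local daemon and print the fleet-wide average CPU load once.
async fn print_average_container_cpu() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let docker = Docker::connect_with_local_defaults()?;
    let avg = cpu::get_average_cpu_usage(&docker).await?;
    println!("average container CPU: {avg:.2}%");
    Ok(())
}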


@@ -0,0 +1,101 @@
pub mod cpu;
pub mod network;
pub mod ram;
pub mod status;
use serde::{Deserialize, Serialize};
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ContainerStatusInfo {
pub container_id: Option<String>,
pub status: Option<String>, // "running", "stopped", "paused", "exited", etc.
pub state: Option<String>, // More detailed state information
pub started_at: Option<String>,
pub finished_at: Option<String>,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ContainerCpuInfo {
pub container_id: Option<String>,
pub cpu_usage_percent: Option<f64>,
pub system_cpu_usage: Option<u64>,
pub container_cpu_usage: Option<u64>,
pub online_cpus: Option<u32>,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ContainerNetworkInfo {
pub container_id: Option<String>,
pub rx_bytes: Option<u64>,
pub tx_bytes: Option<u64>,
pub rx_packets: Option<u64>,
pub tx_packets: Option<u64>,
pub rx_errors: Option<u64>,
pub tx_errors: Option<u64>,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ContainerMemoryInfo {
pub container_id: Option<String>,
pub memory_usage: Option<u64>,
pub memory_limit: Option<u64>,
pub memory_usage_percent: Option<f64>,
}
use bollard::Docker;
use std::error::Error;
/// Get container statistics for all containers using an existing Docker client
pub async fn get_container_stats(
docker: &Docker,
) -> Result<
(
Vec<ContainerCpuInfo>,
Vec<ContainerNetworkInfo>,
Vec<ContainerMemoryInfo>,
Vec<ContainerStatusInfo>,
),
Box<dyn Error + Send + Sync>,
> {
let cpu_infos = cpu::get_all_containers_cpu_stats(docker).await?;
let net_infos = network::get_all_containers_network_stats(docker).await?;
let mem_infos = ram::get_all_containers_memory_stats(docker).await?;
let status_infos = status::get_all_containers_status(docker).await?;
Ok((cpu_infos, net_infos, mem_infos, status_infos))
}
/// Get container statistics for a specific container
pub async fn get_single_container_stats(
docker: &Docker,
container_id: &str,
) -> Result<(
Option<ContainerCpuInfo>,
Option<ContainerNetworkInfo>,
Option<ContainerMemoryInfo>,
Option<ContainerStatusInfo>,
), Box<dyn Error + Send + Sync>> {
let cpu_info = cpu::get_single_container_cpu_stats(docker, container_id).await?;
let net_info = network::get_single_container_network_stats(docker, container_id).await?;
let mem_info = ram::get_single_container_memory_stats(docker, container_id).await?;
let status_info = status::get_single_container_status(docker, container_id).await?;
Ok((cpu_info, net_info, mem_info, status_info))
}
/// Get total network statistics across all containers
pub async fn get_total_network_stats(
docker: &Docker,
) -> Result<(u64, u64), Box<dyn Error + Send + Sync>> {
network::get_total_network_stats(docker).await
}
/// Get average CPU usage across all containers
pub async fn get_average_cpu_usage(docker: &Docker) -> Result<f64, Box<dyn Error + Send + Sync>> {
cpu::get_average_cpu_usage(docker).await
}
/// Get total memory usage across all containers
pub async fn get_total_memory_usage(docker: &Docker) -> Result<u64, Box<dyn Error + Send + Sync>> {
ram::get_total_memory_usage(docker).await
}
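A short, hypothetical caller for the per-container variant, assuming the module path crate::docker::stats and an already-connected bollard client; each element of the returned tuple may be None when the corresponding stats were unavailable.

use bollard::Docker;
use crate::docker::stats;

// Hypothetical sketch: fetch all four stat groups for one container and log whatever is present.
async fn log_container(docker: &Docker, id: &str) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let (cpu, net, mem, status) = stats::get_single_container_stats(docker, id).await?;
    if let Some(cpu) = cpu {
        println!("{id}: cpu {:?}%", cpu.cpu_usage_percent);
    }
    if let Some(mem) = mem {
        println!("{id}: memory {:?}%", mem.memory_usage_percent);
    }
    if let Some(net) = net {
        println!("{id}: rx {:?} B, tx {:?} B", net.rx_bytes, net.tx_bytes);
    }
    if let Some(status) = status {
        println!("{id}: state {:?}", status.status);
    }
    Ok(())
}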


@@ -0,0 +1,79 @@
use super::ContainerNetworkInfo;
use bollard::query_parameters::{ListContainersOptions, StatsOptions};
use bollard::Docker;
use futures_util::stream::TryStreamExt;
use std::error::Error;
/// Get network statistics for all containers
pub async fn get_all_containers_network_stats(
docker: &Docker,
) -> Result<Vec<ContainerNetworkInfo>, Box<dyn Error + Send + Sync>> {
let containers = docker
.list_containers(Some(ListContainersOptions {
all: true,
..Default::default()
}))
.await?;
let mut net_infos = Vec::new();
for container in containers {
let id = container.id.unwrap_or_default();
// Skip if no ID
if id.is_empty() {
continue;
}
if let Some(net_info) = get_single_container_network_stats(docker, &id).await? {
net_infos.push(net_info);
}
}
Ok(net_infos)
}
/// Get network statistics for a specific container
pub async fn get_single_container_network_stats(
docker: &Docker,
container_id: &str,
) -> Result<Option<ContainerNetworkInfo>, Box<dyn Error + Send + Sync>> {
let mut stats_stream = docker.stats(
container_id,
Some(StatsOptions {
stream: false,
one_shot: true,
}),
);
if let Some(stats) = stats_stream.try_next().await? {
if let Some(networks) = stats.networks {
// Take the first network interface (usually eth0)
if let Some((_name, net)) = networks.into_iter().next() {
return Ok(Some(ContainerNetworkInfo {
container_id: Some(container_id.to_string()),
rx_bytes: net.rx_bytes,
tx_bytes: net.tx_bytes,
rx_packets: net.rx_packets,
tx_packets: net.tx_packets,
rx_errors: net.rx_errors,
tx_errors: net.tx_errors,
}));
}
}
}
Ok(None)
}
/// Get total network statistics across all containers
pub async fn get_total_network_stats(
docker: &Docker,
) -> Result<(u64, u64), Box<dyn Error + Send + Sync>> {
let net_infos = get_all_containers_network_stats(docker).await?;
let total_rx: u64 = net_infos.iter().map(|net| net.rx_bytes.unwrap_or(0)).sum();
let total_tx: u64 = net_infos.iter().map(|net| net.tx_bytes.unwrap_or(0)).sum();
Ok((total_rx, total_tx))
}
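Note that rx_bytes and tx_bytes are cumulative counters, not rates; a throughput figure has to be derived from two samples. A hypothetical sketch follows (the 20-second window is an assumption mirroring the collector loop, not something this module prescribes):

use bollard::Docker;
use std::time::Duration;
use crate::docker::stats::network;

// Hypothetical sketch: approximate total receive throughput in bytes/s from two cumulative samples.
async fn sample_rx_rate(docker: &Docker) -> Result<f64, Box<dyn std::error::Error + Send + Sync>> {
    let (rx_before, _tx_before) = network::get_total_network_stats(docker).await?;
    tokio::time::sleep(Duration::from_secs(20)).await;
    let (rx_after, _tx_after) = network::get_total_network_stats(docker).await?;
    Ok(rx_after.saturating_sub(rx_before) as f64 / 20.0)
}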


@@ -0,0 +1,77 @@
use super::ContainerMemoryInfo;
use bollard::query_parameters::{ListContainersOptions, StatsOptions};
use bollard::Docker;
use futures_util::stream::TryStreamExt;
use std::error::Error;
/// Get memory statistics for all containers
pub async fn get_all_containers_memory_stats(
docker: &Docker,
) -> Result<Vec<ContainerMemoryInfo>, Box<dyn Error + Send + Sync>> {
let containers = docker
.list_containers(Some(ListContainersOptions {
all: true,
..Default::default()
}))
.await?;
let mut mem_infos = Vec::new();
for container in containers {
let id = container.id.unwrap_or_default();
// Skip if no ID
if id.is_empty() {
continue;
}
if let Some(mem_info) = get_single_container_memory_stats(docker, &id).await? {
mem_infos.push(mem_info);
}
}
Ok(mem_infos)
}
/// Get memory statistics for a specific container
pub async fn get_single_container_memory_stats(
docker: &Docker,
container_id: &str,
) -> Result<Option<ContainerMemoryInfo>, Box<dyn Error + Send + Sync>> {
let mut stats_stream = docker.stats(
container_id,
Some(StatsOptions {
stream: false,
one_shot: true,
}),
);
if let Some(stats) = stats_stream.try_next().await? {
if let Some(memory_stats) = &stats.memory_stats {
let memory_usage = memory_stats.usage.unwrap_or(0);
let memory_limit = memory_stats.limit.unwrap_or(1); // Avoid division by zero
let memory_usage_percent = if memory_limit > 0 {
(memory_usage as f64 / memory_limit as f64) * 100.0
} else {
0.0
};
return Ok(Some(ContainerMemoryInfo {
container_id: Some(container_id.to_string()),
memory_usage: Some(memory_usage),
memory_limit: Some(memory_limit),
memory_usage_percent: Some(memory_usage_percent),
}));
}
}
Ok(None)
}
/// Get total memory usage across all containers
pub async fn get_total_memory_usage(docker: &Docker) -> Result<u64, Box<dyn Error + Send + Sync>> {
let mem_infos = get_all_containers_memory_stats(docker).await?;
let total_memory: u64 = mem_infos.iter().map(|mem| mem.memory_usage.unwrap_or(0)).sum();
Ok(total_memory)
}


@@ -0,0 +1,126 @@
use super::ContainerStatusInfo;
use std::error::Error;
use bollard::Docker;
use bollard::query_parameters::{ListContainersOptions, InspectContainerOptions};
use bollard::models::{ContainerSummaryStateEnum, ContainerStateStatusEnum};
/// Get status information for all containers
pub async fn get_all_containers_status(
docker: &Docker,
) -> Result<Vec<ContainerStatusInfo>, Box<dyn Error + Send + Sync>> {
let containers = docker
.list_containers(Some(ListContainersOptions {
all: true, // Include stopped containers
..Default::default()
}))
.await?;
let mut status_infos = Vec::new();
for container in containers {
let id = container.id.unwrap_or_default();
if id.is_empty() {
continue;
}
// Convert ContainerSummaryStateEnum to String
let status = container.state.map(|state| match state {
ContainerSummaryStateEnum::CREATED => "created".to_string(),
ContainerSummaryStateEnum::RUNNING => "running".to_string(),
ContainerSummaryStateEnum::PAUSED => "paused".to_string(),
ContainerSummaryStateEnum::RESTARTING => "restarting".to_string(),
ContainerSummaryStateEnum::REMOVING => "removing".to_string(),
ContainerSummaryStateEnum::EXITED => "exited".to_string(),
ContainerSummaryStateEnum::DEAD => "dead".to_string(),
_ => "unknown".to_string(),
});
// The list API only exposes the creation timestamp (i64); stringify it as an approximation of started_at
let started_at = container.created.map(|timestamp| timestamp.to_string());
status_infos.push(ContainerStatusInfo {
container_id: Some(id.clone()),
status,
state: container.status,
started_at,
finished_at: None, // Docker API doesn't provide finished_at in list
});
}
Ok(status_infos)
}
/// Get status information for a specific container
pub async fn get_single_container_status(
docker: &Docker,
container_id: &str,
) -> Result<Option<ContainerStatusInfo>, Box<dyn Error + Send + Sync>> {
// First try to get from list (faster)
let containers = docker
.list_containers(Some(ListContainersOptions {
all: true,
..Default::default()
}))
.await?;
if let Some(container) = containers.into_iter().find(|c| {
c.id.as_ref().map(|id| id == container_id).unwrap_or(false)
}) {
// Convert ContainerSummaryStateEnum to String
let status = container.state.map(|state| match state {
ContainerSummaryStateEnum::CREATED => "created".to_string(),
ContainerSummaryStateEnum::RUNNING => "running".to_string(),
ContainerSummaryStateEnum::PAUSED => "paused".to_string(),
ContainerSummaryStateEnum::RESTARTING => "restarting".to_string(),
ContainerSummaryStateEnum::REMOVING => "removing".to_string(),
ContainerSummaryStateEnum::EXITED => "exited".to_string(),
ContainerSummaryStateEnum::DEAD => "dead".to_string(),
_ => "unknown".to_string(),
});
// The list API only exposes the creation timestamp (i64); stringify it as an approximation of started_at
let started_at = container.created.map(|timestamp| timestamp.to_string());
return Ok(Some(ContainerStatusInfo {
container_id: Some(container_id.to_string()),
status,
state: container.status,
started_at,
finished_at: None,
}));
}
// Fallback to inspect for more detailed info
match docker.inspect_container(container_id, None::<InspectContainerOptions>).await {
Ok(container_details) => {
let state = container_details.state.unwrap_or_default();
// Convert ContainerStateStatusEnum to String
let status = state.status.map(|status_enum| match status_enum {
ContainerStateStatusEnum::CREATED => "created".to_string(),
ContainerStateStatusEnum::RUNNING => "running".to_string(),
ContainerStateStatusEnum::PAUSED => "paused".to_string(),
ContainerStateStatusEnum::RESTARTING => "restarting".to_string(),
ContainerStateStatusEnum::REMOVING => "removing".to_string(),
ContainerStateStatusEnum::EXITED => "exited".to_string(),
ContainerStateStatusEnum::DEAD => "dead".to_string(),
_ => "unknown".to_string(),
});
// These are already Option<String> from the Docker API
let started_at = state.clone().started_at;
let finished_at = state.clone().finished_at;
Ok(Some(ContainerStatusInfo {
container_id: Some(container_id.to_string()),
status,
state: Some(format!("{:?}", state)), // Convert state to string
started_at,
finished_at,
}))
}
Err(_) => Ok(None), // Container not found
}
}


@@ -31,7 +31,6 @@ pub mod hardware;
 pub mod metrics;
 pub mod models;
-use bollard::Docker;
 use std::env;
 use std::error::Error;
 use tokio::task::JoinHandle;
@@ -93,7 +92,7 @@ async fn main() -> Result<(), Box<dyn Error + Send + Sync>> {
         Ok((id, ip)) => {
             println!("Registered with server. ID: {}, IP: {}", id, ip);
             (id, ip)
-        },
+        }
         Err(e) => {
             eprintln!("Fehler bei der Registrierung am Server: {e}");
             return Err(e);
@@ -111,9 +110,23 @@ async fn main() -> Result<(), Box<dyn Error + Send + Sync>> {
     };
     println!("Client Version: {}", client_version);
+    // Prepare Docker registration DTO
+    let container_dto = if let Some(ref docker_manager) = docker_manager {
+        docker_manager.create_registration_dto().await?
+    } else {
+        println!("Fallback for failing registration");
+        models::DockerRegistrationDto {
+            server_id: 0,
+            //container_count: 0, --- IGNORE ---
+            containers: serde_json::to_value(&"")?,
+        }
+    };
+    let _ =
+        api::broadcast_docker_containers(server_url, server_id, &mut container_dto.clone()).await?;
     // Start background tasks
     // Start server listening for commands (only if Docker is available)
-    let listening_handle = if let Some(docker_manager) = docker_manager {
+    let listening_handle = if let Some(ref docker_manager) = docker_manager {
         tokio::spawn({
             let docker = docker_manager.docker.clone();
             let server_url = server_url.to_string();
@@ -136,9 +149,16 @@ async fn main() -> Result<(), Box<dyn Error + Send + Sync>> {
     let metrics_handle = tokio::spawn({
         let ip = ip.clone();
         let server_url = server_url.to_string();
+        let docker_manager = docker_manager.as_ref().cloned().unwrap();
         async move {
-            let mut collector = metrics::Collector::new(server_id, ip);
-            collector.run(&server_url).await
+            let mut collector = metrics::Collector::new(server_id, ip, docker_manager);
+            if let Err(e) = collector.run(&server_url).await {
+                eprintln!("Metrics collection error: {}", e);
+                // Don't panic, just return the error
+                Err(e)
+            } else {
+                Ok(())
+            }
         }
     });


@@ -13,10 +13,11 @@ use std::error::Error;
 use std::time::Duration;
 use crate::api;
+use crate::docker::DockerManager;
 //use crate::docker::DockerInfo;
 use crate::hardware::network::NetworkMonitor;
 use crate::hardware::HardwareInfo;
-use crate::models::MetricDto;
+use crate::models::{DockerMetricDto, MetricDto};
 /// Main orchestrator for hardware and network metric collection and reporting.
 ///
@@ -27,8 +28,9 @@ use crate::models::MetricDto;
 /// - `server_id`: Unique server ID assigned by the backend.
 /// - `ip_address`: IP address of the agent.
 pub struct Collector {
+    docker_manager: DockerManager,
     network_monitor: NetworkMonitor,
-    server_id: i32,
+    server_id: u16,
     ip_address: String,
 }
@@ -41,8 +43,9 @@ impl Collector {
     ///
     /// # Returns
     /// A new `Collector` ready to collect and report metrics.
-    pub fn new(server_id: i32, ip_address: String) -> Self {
+    pub fn new(server_id: u16, ip_address: String, docker_manager: DockerManager) -> Self {
         Self {
+            docker_manager,
             network_monitor: NetworkMonitor::new(),
             server_id,
             ip_address,
@@ -72,7 +75,16 @@ impl Collector {
                     continue;
                 }
             };
+            let docker_metrics = match self.docker_collect().await {
+                Ok(metrics) => metrics,
+                Err(e) => {
+                    eprintln!("Error collecting docker metrics: {}", e);
+                    tokio::time::sleep(Duration::from_secs(10)).await;
+                    continue;
+                }
+            };
             api::send_metrics(base_url, &metrics).await?;
+            api::send_docker_metrics(base_url, &docker_metrics).await?;
             tokio::time::sleep(Duration::from_secs(20)).await;
         }
     }
@@ -112,4 +124,14 @@ impl Collector {
             net_tx: hardware.network.tx_rate.unwrap_or_default(),
         })
     }
+    /// NOTE: This is a compilation-safe stub. Implement the Docker collection using your
+    /// DockerManager API and container helpers when available.
+    pub async fn docker_collect(&self) -> Result<DockerMetricDto, Box<dyn Error + Send + Sync>> {
+        let metrics = self.docker_manager.collect_metrics().await?;
+        Ok(DockerMetricDto {
+            server_id: self.server_id,
+            containers: metrics.containers,
+        })
+    }
 }


@@ -12,6 +12,7 @@
 use crate::docker::stats;
 use serde::{Deserialize, Serialize};
+use serde_json::Value;
 /// Registration data sent to the backend server.
 ///
@@ -25,7 +26,7 @@ use serde::{Deserialize, Serialize};
 #[derive(Serialize, Debug)]
 pub struct RegistrationDto {
     #[serde(rename = "id")]
-    pub server_id: i32,
+    pub server_id: u16,
     #[serde(rename = "ipAddress")]
     pub ip_address: String,
     #[serde(rename = "cpuType")]
@@ -59,7 +60,7 @@ pub struct RegistrationDto {
 #[derive(Serialize, Debug)]
 pub struct MetricDto {
     #[serde(rename = "serverId")]
-    pub server_id: i32,
+    pub server_id: u16,
     #[serde(rename = "ipAddress")]
     pub ip_address: String,
     #[serde(rename = "cpu_Load")]
@@ -116,7 +117,7 @@ pub struct DiskInfoDetailed {
 /// - `ip_address`: IPv4 or IPv6 address (string)
 #[derive(Deserialize)]
 pub struct IdResponse {
-    pub id: i32,
+    pub id: u16,
     #[serde(rename = "ipAddress")]
     pub ip_address: String,
 }
@@ -159,7 +160,7 @@ pub struct ServerMessage {
     // Define your message structure here
     pub message_type: String,
     pub data: serde_json::Value,
-    pub message_id: String, // Add an ID for acknowledgment
+    pub message_id: String,
 }
 /// Acknowledgment payload sent to the backend server for command messages.
@@ -182,26 +183,91 @@ pub struct Acknowledgment {
 /// - `image`: Docker image name (string)
 /// - `Name`: Container name (string)
 /// - `Status`: Container status ("running", "stopped", etc.)
-/// - `_net_in`: Network receive rate in **bytes per second (B/s)**
-/// - `_net_out`: Network transmit rate in **bytes per second (B/s)**
-/// - `_cpu_load`: CPU usage as a percentage (**0.0-100.0**)
 #[derive(Debug, Serialize, Clone)]
-pub struct DockerContainerRegistrationDto {
-    pub server_id: u32,
-    pub containers: Vec<DockerContainerDto>,
+pub struct DockerRegistrationDto {
+    /// Unique server identifier (integer)
+    #[serde(rename = "Server_id")]
+    pub server_id: u16,
+    /// Number of currently running containers
+    // pub container_count: usize, --- IGNORE ---
+    /// json stringified array of DockerContainer
+    ///
+    /// ## Json Example
+    /// json format: [{"id":"234dsf234","image":"nginx:latest","name":"webserver"},...]
+    ///
+    /// ## Fields
+    /// id: unique container ID (first 12 hex digits)
+    /// image: docker image name
+    /// name: container name
+    #[serde(rename = "Containers")]
+    pub containers: Value, // Vec<DockerContainer>,
 }
 #[derive(Debug, Serialize, Clone)]
-pub struct DockerContainerDto {
-    pub id: String,
-    pub image: String,
-    pub name: String,
+pub struct DockerMetricDto {
+    pub server_id: u16,
+    /// json stringified array of DockerContainer
+    ///
+    /// ## Json Example
+    /// json format: [{"id":"234dsf234","status":"running","image":"nginx:latest","name":"webserver","network":{"net_in":1024,"net_out":2048},"cpu":{"cpu_load":12.5},"ram":{"ram_load":10.0}},...]
+    ///
+    /// ## Fields
+    /// id: unique container ID (first 12 hex digits)
+    /// status: "running";"stopped";others
+    /// image: docker image name
+    /// name: container name
+    /// network: network stats
+    /// cpu: cpu stats
+    /// ram: ram stats
+    pub containers: Value, // Vec<DockerContainerInfo>,
 }
 #[derive(Debug, Serialize, Clone)]
-pub struct DockerContainerMetricDto {
-    pub id: String,
-    pub status: String, // "running";"stopped";others
-    pub network: stats::ContainerNetworkInfo,
-    pub cpu: stats::ContainerCpuInfo,
+pub struct DockerCollectMetricDto {
+    pub server_id: u16,
+    pub status: DockerContainerStatusDto,
+    pub cpu: DockerContainerCpuDto,
+    pub ram: DockerContainerRamDto,
+    pub network: DockerContainerNetworkDto,
+}
+#[derive(Debug, Serialize, Clone)]
+pub struct DockerContainerStatusDto {
+    pub status: Option<String>,
+}
+#[derive(Debug, Serialize, Clone)]
+pub struct DockerContainerCpuDto {
+    pub cpu_load: Option<f64>,
+}
+#[derive(Debug, Serialize, Clone)]
+pub struct DockerContainerRamDto {
+    pub ram_load: Option<f64>,
+}
+#[derive(Debug, Serialize, Clone)]
+pub struct DockerContainerNetworkDto {
+    pub net_in: Option<f64>,
+    pub net_out: Option<f64>,
+}
+#[derive(Debug, Serialize, Clone)]
+pub struct DockerContainerInfo {
+    pub container: Option<DockerContainer>,
+    pub status: Option<stats::ContainerStatusInfo>, // "running";"stopped";others
+    pub network: Option<stats::ContainerNetworkInfo>,
+    pub cpu: Option<stats::ContainerCpuInfo>,
+    pub ram: Option<stats::ContainerMemoryInfo>,
+}
+#[derive(Debug, Serialize, Clone)]
+pub struct DockerContainer {
+    pub id: String,
+    #[serde(default)]
+    pub image: Option<String>,
+    #[serde(default)]
+    pub name: Option<String>,
 }
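For the Value-typed containers fields above, a plausible construction (hypothetical helper; serde_json::to_value is the only assumption beyond the structs defined here) that yields the documented [{"id":...,"image":...,"name":...}] shape:

use serde_json::Value;

// Hypothetical helper: serialize the container list into the JSON array carried by DockerRegistrationDto.containers.
fn containers_to_value(containers: &[DockerContainer]) -> Result<Value, serde_json::Error> {
    serde_json::to_value(containers)
}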


@@ -0,0 +1,44 @@
networks:
  watcher-network:
    driver: bridge

services:
  watcher:
    image: git.triggermeelmo.com/watcher/watcher-server:v0.1.11
    container_name: watcher
    deploy:
      resources:
        limits:
          memory: 200M
    restart: unless-stopped
    env_file: .env
    ports:
      - "5000:5000"
    volumes:
      - ./watcher-volumes/data:/app/persistence
      - ./watcher-volumes/dumps:/app/wwwroot/downloads/sqlite
      - ./watcher-volumes/logs:/app/logs

  watcher-agent:
    image: git.triggermeelmo.com/donpat1to/watcher-agent:v0.1.28
    container_name: watcher-agent
    restart: always
    privileged: true # Grants full hardware access (use with caution)
    env_file: .env
    pid: "host"
    volumes:
      # Mount critical system paths for hardware monitoring
      - /sys:/sys:ro # CPU/GPU temps, sensors
      - /proc:/proc # Process/CPU stats
      - /dev:/dev:ro # Disk/GPU device access
      - /var/run/docker.sock:/var/run/docker.sock # Docker API access
      - /:/root:ro # Access to host root for the df command
      # Application volumes
      - ./config:/app/config:ro
      - ./logs:/app/logs
    network_mode: host # Uses host network (for correct IP/interface detection)
    healthcheck:
      test: [ "CMD", "/usr/local/bin/WatcherAgent", "healthcheck" ]
      interval: 30s
      timeout: 3s
      retries: 3


@@ -1,23 +1,20 @@
-watcher-agent:
-  image: git.triggermeelmo.com/donpat1to/watcher-agent:development
-  container_name: watcher-agent
-  restart: always
-  privileged: true # Grants full hardware access (use with caution)
-  env_file: .env
-  pid: "host"
-  volumes:
-    # Mount critical system paths for hardware monitoring
-    - /sys:/sys:ro # CPU/GPU temps, sensors
-    - /proc:/proc # Process/CPU stats
-    - /dev:/dev:ro # Disk/GPU device access
-    - /var/run/docker.sock:/var/run/docker.sock # Docker API access
-    - /:/root:ro # Access to for df-command
-    # Application volumes
-    - ./config:/app/config:ro
-    - ./logs:/app/logs
-  network_mode: host # Uses host network (for correct IP/interface detection)
-  healthcheck:
-    test: ["CMD", "/usr/local/bin/WatcherAgent", "healthcheck"]
-    interval: 30s
-    timeout: 3s
-    retries: 3
+networks:
+  watcher-network:
+    driver: bridge
+
+services:
+  watcher:
+    image: git.triggermeelmo.com/watcher/watcher-server:v0.1.11
+    container_name: watcher
+    deploy:
+      resources:
+        limits:
+          memory: 200M
+    restart: unless-stopped
+    env_file: .env
+    ports:
+      - "5000:5000"
+    volumes:
+      - ./watcher-volumes/data:/app/persistence
+      - ./watcher-volumes/dumps:/app/wwwroot/downloads/sqlite
+      - ./watcher-volumes/logs:/app/logs