armbian-next: the great cli entrypoint (+docker) rewrite; introduce `USE_LOCAL_APT_DEB_CACHE` replacing `apt-cacher-ng`

- armbian-next: introduce `USE_LOCAL_APT_DEB_CACHE` (default `=yes`) as an alternative to, or in addition to, `apt-cacher-ng` (e.g. in Docker)
  - this uses `cache/aptcache/${RELEASE}-${ARCH}` (in the host) for
      - apt cache, by bind-mounting it to `${SDCARD}/var/cache/apt` in the `chroot_sdcard_apt_get()` runner and its usages
      - debootstrap, by passing it `--cache-dir`
  - add a utility function to help understand what is happening to the cache during usage
  - apt itself maintains this cache, removing old packages when new ones are installed; apt does this _by default_
      - introduce `DONT_MAINTAIN_APT_CACHE=yes` to opt out of apt's automatic cache maintenance, e.g. during `remove`s
      - don't run `apt clean` and the like when using the local cache; that would clean the cache, not the chroot
  - clean up `install_deb_chroot()` a little; found an unrelated bug there
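As a rough sketch of the cache wiring described above (assuming the framework sets `SRC`, `RELEASE`, `ARCH` and `SDCARD`; the function names here are illustrative, not the actual build-script ones):

```shell
#!/usr/bin/env bash
# Sketch: one host-side cache dir, consumed two ways. Assumes the framework
# sets SRC, RELEASE, ARCH and SDCARD; names are illustrative.

function local_apt_cache_dir() {
	echo "${SRC}/cache/aptcache/${RELEASE}-${ARCH}"
}

# apt inside the chroot: bind-mount the host cache over /var/cache/apt (needs root).
function mount_local_apt_cache() {
	[[ "${USE_LOCAL_APT_DEB_CACHE}" == "yes" ]] || return 0
	mkdir -p "$(local_apt_cache_dir)"
	mount --bind "$(local_apt_cache_dir)" "${SDCARD}/var/cache/apt"
}

# debootstrap: the very same dir, but passed via --cache-dir instead.
function debootstrap_cache_args() {
	[[ "${USE_LOCAL_APT_DEB_CACHE}" == "yes" ]] && echo "--cache-dir=$(local_apt_cache_dir)"
}
```

Since apt maintains `/var/cache/apt` on its own, the bind-mount alone is enough to get cross-build reuse of downloaded `.deb`s.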
- WiP: the great cli entrypoint (+docker) rewrite, Phase 6: relaunching structure; re-pass ARMBIAN_BUILD_UUID; use ARMBIAN_COMMAND for log filename; fix for output/logs dir perms
- WiP: the great cli entrypoint (+docker) rewrite, Phase 5: cleanups 4/x; better logging, check & force `DEST_LANG`
- WiP: the great cli entrypoint (+docker) rewrite, Phase 5: cleanups 3/x; don't write to stderr in generated Dockerfile
  - it renders as `drastic red` on non-buildx Docker
- WiP: the great cli entrypoint (+docker) rewrite, Phase 5: cleanups 2/x, logging
- WiP: the great cli entrypoint (+docker) rewrite, Phase 5: cleanups 1/x
  - source configs in a logging section.
  - Docker: silent, fast retries to make sure `docker system df` works
  - silence `chown` output (drop `-v`) related to `SET_OWNER_TO_UID`
  - ask user to wait while `DESTIMG` is rsync'ed to `FINALDEST` -- it's potentially very slow
  - use green apple for Mac logging, instead of red apple which might imply error...
- WiP: the great cli entrypoint (+docker) rewrite, Phase 4: run as non-root, maybe-with-Docker
  - introduce `is_docker_ready_to_go()`; if it is, and we're not root, use Docker instead of sudo. <- GOOD IDEA? BAD IDEA? lol
  - introduce `SET_OWNER_TO_UID` var to be passed to Docker/sudo so written files are owned by the launching user, not root.
    - introduce `mkdir_recursive_and_set_uid_owner()` and `reset_uid_owner()` to reset owner based on `SET_OWNER_TO_UID`
    - use it for userpatches files created, logs, and output files, including images and debs.
  - add @TODOs ref. `$SUDO_USER`, which I think the old version used for this?
  - add a lot of @TODOs about being able to relaunch something other than `build` inside Docker, and about adding/changing params, configs, and the command.
    - initially add `ARMBIAN_DOCKER_RELAUNCH_EXTRA_ARGS`
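A minimal sketch of the `SET_OWNER_TO_UID` helpers (names taken from the bullets above; the real implementations likely handle more edge cases):

```shell
#!/usr/bin/env bash
# Sketch of the SET_OWNER_TO_UID ownership helpers described above.

function reset_uid_owner() {
	[[ -n "${SET_OWNER_TO_UID}" ]] || return 0 # nothing to do if not set
	chown -R "${SET_OWNER_TO_UID}" "$@"        # quiet on purpose: no '-v'
}

function mkdir_recursive_and_set_uid_owner() {
	mkdir -p "$1"
	reset_uid_owner "$1"
}
```

The launching (non-root) user's `EUID` is passed down to the Docker/sudo relaunch, so files written as root end up owned by that user.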
- WiP: the great cli entrypoint (+docker) rewrite, Phase 3: rpardini is demented, v3
- WiP: the great cli entrypoint (+docker) rewrite, Phase 2: rpardini is demented
- WiP: the great cli entrypoint (+docker) rewrite, Phase 1
- armbian-next: WiP: Docker: actually use the GHA-image as base; pull it every 24h.
  - using image in my private repo.
  - this significantly speeds up time-to-first-build on the 1st run
  - move some Linux-specific stuff into its own `if`
  - add comments and todo
- armbian-next: WiP: Docker, high-WiP, beginnings of Armbian mount dict, with linux/darwin preferences
- armbian-next: WiP: Docker, configure `BUILDKIT_COLORS`
- armbian-next: WiP: Docker, make docker image from Dockerfile more compact by flattening layers
- armbian-next: `logging`: add whale indicator if build running under Docker
- armbian-next: WiP: `docker`: working with `bookworm`, `sid`, and `jammy` on Darwin & Linux; works with `bullseye` on Linux only
- armbian-next: WiP: `docker`: force `ARMBIAN_RUNNING_IN_CONTAINER` both in the Dockerfile and via `--env`; apt update and install in the same layer; back to jammy
- armbian-next: introduce `armbian_is_running_in_container()` and `armbian_is_host_running_systemd()`, replacing `systemd-detect-virt` in multiple spots
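A plausible sketch of such detection without `systemd-detect-virt` (the markers checked here are common conventions and an assumption, not necessarily what the build scripts actually use):

```shell
#!/usr/bin/env bash
# Sketch: container & systemd detection without systemd-detect-virt.
# The markers below are common conventions; the real functions may differ.

function armbian_is_running_in_container() {
	[[ "${ARMBIAN_RUNNING_IN_CONTAINER}" == "yes" ]] && return 0 # forced via env (e.g. by the Dockerfile)
	[[ -f /.dockerenv ]] && return 0                             # classic Docker marker
	grep -sq 'docker\|container' /proc/1/cgroup 2> /dev/null     # cgroup hint; may be absent on cgroup v2
}

function armbian_is_host_running_systemd() {
	[[ -d /run/systemd/system ]] # documented marker for "systemd is PID 1"
}
```

Forcing `ARMBIAN_RUNNING_IN_CONTAINER` from the Dockerfile (as the bullet above notes) sidesteps cases like `debian:bullseye`, where the filesystem markers alone are not reliable.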
- WiP: try with debian:bullseye -- can't detect docker at all
- armbian-next: WiP: 2nd stab at new Docker support; Darwin still works; Linux `docker.io` working
  - gen .dockerignore together with Dockerfile
  - split in funcs
  - hacks for Linux and `/dev/loop` stuff, CONTAINER_COMPAT=yes
  - Mac still works; the Linux stuff would break it, but I guarded it with `if`s
- armbian-next: the secrets of `CONTAINER_COMPAT` revealed; add size checking to check_loop_device() and avoid retry when `mknod`ing
  - this now fails for the right reasons, causing retries, which then work ;-)
  - this is related to building under Docker on Linux, using docker.io package (not docker-ce)
- armbian-next: remove `.dockerignore` and add it to `.gitignore`; it's going to be auto-generated
- armbian-next: `.dockerignore`: Docker context should only have minimal files and folders, to speed up Dockerfile build
  - IMPORTANT: `.dockerignore` is going to be generated from now on: so this is the last commit with changes before removal
- armbian-next: WiP: initial stab at new Docker support; really run the passed cmdline; add Dockerfile to gitignore
- armbian-next: WiP: initial stab at new Docker support; generate Dockerfile; introduce REQUIREMENTS_DEFS_ONLY
  - uses REQUIREMENTS_DEFS_ONLY
  - works on Docker Desktop on Mac;
  - linux TBA
- armbian-next: don't error out if `.git` not present; other small fixes
- armbian-next: general "work or at least don't misbehave when run on a very bare ubuntu:latest instance"
  - can't assume things, for example:
  - that `sudo` will be available; it might not be, and we might already be root, so there's no reason to fail
  - that `/etc/timezone` will exist
  - that `systemd-detect-virt` will be available
  - that `git` will be available
  - that `locale-gen` will be available
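The "can't assume" items above boil down to probing before using; a hedged sketch (`maybe_sudo` and `host_timezone` are illustrative names, not the actual helpers):

```shell
#!/usr/bin/env bash
# Sketch: probe for tools instead of assuming them; skip sudo when already root.

function maybe_sudo() {
	if [[ "${EUID}" == "0" ]]; then
		"$@" # already root; sudo not needed (and might not even be installed)
	elif command -v sudo > /dev/null; then
		sudo "$@"
	else
		echo "need root, and sudo is unavailable" >&2
		return 1
	fi
}

function host_timezone() {
	# /etc/timezone may not exist on a bare image; fall back instead of failing.
	if [[ -f /etc/timezone ]]; then cat /etc/timezone; else echo "Etc/UTC"; fi
}
```

The same `command -v` probe applies to `git`, `systemd-detect-virt` and `locale-gen` before invoking them.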
This commit is contained in:
Ricardo Pardini 2022-10-09 17:58:23 +02:00
parent 2c6751f584
commit d24d3327a8
No known key found for this signature in database
GPG Key ID: 3D38CA12A66C5D02
37 changed files with 1476 additions and 504 deletions

@@ -1,4 +0,0 @@
### output directories
.tmp/
output/
cache/

.gitignore vendored
@@ -25,3 +25,7 @@ ubuntu-*-cloudimg-console.log
# Mainly generated by merge tools like 'meld'
*.orig
# Dockerfile and .dockerignore are generated by docker.sh
Dockerfile
.dockerignore

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
#
# Copyright (c) 2013-2021 Igor Pecovnik, igor.pecovnik@gma**.com
#
@@ -7,7 +7,7 @@
# warranty of any kind, whether express or implied.
#
# This file is a part of the Armbian build script
# https://github.com/armbian/build/
# https://github.com/armbian/build/
# DO NOT EDIT THIS FILE
# use configuration files like config-default.conf to set the build configuration

@@ -1,151 +0,0 @@
# DO NOT EDIT THIS FILE
#
# This is a Docker launcher file. To set up the configuration, use command line arguments to compile.sh
# or pass a config file as a parameter ./compile docker [example] BUILD_KERNEL="yes" ...
# Default values for Docker image
CUSTOM_PACKAGES="g++-11-arm-linux-gnueabihf libssl3 qemu"
BASE_IMAGE="ubuntu:jammy"
[[ ! -c /dev/loop-control ]] && display_alert "/dev/loop-control does not exist, image building may not work" "" "wrn"
# second argument can be a build parameter or a config file
# create user accessible directories and set their owner group and permissions
# if they are created from Docker they will be owned by root and require root permissions to change/delete
mkdir -p $SRC/{output,userpatches}
grep -q '^docker:' /etc/group && chgrp --quiet docker $SRC/{output,userpatches}
chmod --quiet g+w,g+s $SRC/{output,userpatches}
VERSION=$(cat $SRC/VERSION)
if grep -q $VERSION <(grep armbian <(docker images)); then
display_alert "Using existing Armbian Docker container"
else
# build a new container based on provided Dockerfile
display_alert "Docker container not found or out of date"
display_alert "Building a Docker container"
if ! docker build --build-arg CUSTOM_PACKAGES="$CUSTOM_PACKAGES" --build-arg BASE_IMAGE=$BASE_IMAGE -t armbian:$VERSION . ; then
STATUS=$?
# Adding a newline, so the alert won't be shown in the same line as the error
echo
display_alert "Docker container build exited with code: " "$STATUS" "err"
exit 1
fi
fi
DOCKER_FLAGS=()
# Running this container in privileged mode is a simple way to solve loop device access issues
# Required for USB FEL or when writing image directly to the block device, when CARD_DEVICE is defined
#DOCKER_FLAGS+=(--privileged)
# add only required capabilities instead (though MKNOD should be already present)
# CAP_SYS_PTRACE is required for systemd-detect-virt in some cases
DOCKER_FLAGS+=(--cap-add=SYS_ADMIN --cap-add=MKNOD --cap-add=SYS_PTRACE)
# mounting things inside the container on Ubuntu won't work without this
# https://github.com/moby/moby/issues/16429#issuecomment-217126586
DOCKER_FLAGS+=(--security-opt=apparmor:unconfined)
# remove resulting container after exit to minimize clutter
# bad side effect - named volumes are considered not attached to anything and are removed on "docker volume prune"
DOCKER_FLAGS+=(--rm)
# pass through loop devices
for d in /dev/loop*; do
DOCKER_FLAGS+=(--device=$d)
done
# accessing dynamically created devices won't work by default
# and --device doesn't accept devices that don't exist at the time "docker run" is executed
# https://github.com/moby/moby/issues/27886
# --device-cgroup-rule requires new Docker version
# Test for --device-cgroup-rule support. If supported, appends it
# Otherwise, let it go and let user know that only kernel and u-boot for you
if docker run --help | grep device-cgroup-rule > /dev/null 2>&1; then
# allow loop devices (not required)
DOCKER_FLAGS+=(--device-cgroup-rule='b 7:* rmw')
# allow loop device partitions
DOCKER_FLAGS+=(--device-cgroup-rule='b 259:* rmw')
# this is an ugly hack, but it is required to get /dev/loopXpY minor number
# for mknod inside the container, and container itself still uses private /dev internally
DOCKER_FLAGS+=(-v /dev:/tmp/dev:ro)
else
display_alert "Your Docker version does not support device-cgroup-rule" "" "wrn"
display_alert "and will be able to create only Kernel and u-boot packages (KERNEL_ONLY=yes)" "" "wrn"
fi
# Expose ports for NFS server inside docker container, required for USB FEL
#DOCKER_FLAGS+=(-p 0.0.0.0:2049:2049 -p 0.0.0.0:2049:2049/udp -p 0.0.0.0:111:111 -p 0.0.0.0:111:111/udp -p 0.0.0.0:32765:32765 -p 0.0.0.0:32765:32765/udp -p 0.0.0.0:32767:32767 -p 0.0.0.0:32767:32767/udp)
# Export usb device for FEL, required for USB FEL
#DOCKER_FLAGS+=(-v /dev/bus/usb:/dev/bus/usb:ro)
# map source to Docker Working dir.
DOCKER_FLAGS+=(-v=$SRC/:/root/armbian/)
# map /tmp to tmpfs
DOCKER_FLAGS+=(--mount type=tmpfs,destination=/tmp)
# mount 2 named volumes - for cacheable data and compiler cache
DOCKER_FLAGS+=(-v=armbian-cache:/root/armbian/cache -v=armbian-ccache:/root/.ccache)
DOCKER_FLAGS+=(-e COLUMNS="`tput cols`" -e LINES="`tput lines`")
# pass other command line arguments like KERNEL_ONLY=yes, KERNEL_CONFIGURE=yes, etc.
# pass "docker-guest" as an additional config name that will be sourced in the container if exists
if [[ $SHELL_ONLY == yes ]]; then
display_alert "Running the container in shell mode" "" "info"
cat <<\EOF
Welcome to the docker shell of Armbian.
To build the whole thing using default profile, run:
./compile.sh
To build the U-Boot only, run:
# Optional: prepare the environment first if you had not run `./compile.sh`
./compile.sh 'prepare_host && compile_sunxi_tools && install_rkbin_tools'
# build the U-Boot only
./compile.sh compile_uboot
If you prefer to use profile, for example, `userpatches/config-my.conf`, try:
./compile.sh my 'prepare_host && compile_sunxi_tools && install_rkbin_tools'
./compile.sh my compile_uboot
EOF
docker run "${DOCKER_FLAGS[@]}" -it --entrypoint /usr/bin/env armbian:$VERSION "$@" /bin/bash
else
display_alert "Running the container" "" "info"
docker run "${DOCKER_FLAGS[@]}" -it armbian:$VERSION "$@"
fi
# Docker error treatment
STATUS=$?
# Adding a newline, so the message won't be shown in the same line as the error
echo
case $STATUS in
0)
# No errors from either Docker or build script
echo
;;
125)
display_alert "Docker command failed, check syntax or version support. Error code: " "$STATUS" "err"
;;
126)
display_alert "Failure when running containerd command. Error code: " "$STATUS" "err"
;;
127)
display_alert "containerd command not found. Error code: " "$STATUS" "err"
;;
137)
display_alert "Container exit from docker stop. Error code: " "$STATUS" "info"
;;
*)
# Build script exited with error, but the error message should have been already printed
echo
;;
esac
# don't need to proceed further on the host
exit $STATUS

@@ -11,20 +11,20 @@
KERNEL_ONLY="" # leave empty to select each time, set to "yes" or "no" to skip dialog prompt
KERNEL_CONFIGURE="" # leave empty to select each time, set to "yes" or "no" to skip dialog prompt
CLEAN_LEVEL="debs,oldcache" # comma-separated list of clean targets:
: # "make-atf" = make clean for ATF, if it is built.
: # "make-uboot" = make clean for uboot, if it is built.
: # "make-kernel" = make clean for kernel, if it is built. very slow.
: # *important*: "make" by itself has been disabled, since Armbian knows how to handle Make timestamping now.
: # "debs" = delete packages in "./output/debs" for current branch and family. causes rebuilds, hopefully cached.
: # "alldebs" = delete all packages in "./output/debs", "images" = delete "./output/images",
: # "cache" = delete "./output/cache", "sources" = delete "./sources"
: # "oldcache" = remove old cached rootfs except for the newest 8 files
: # --> "make-atf" = make clean for ATF, if it is built.
: # --> "make-uboot" = make clean for uboot, if it is built.
: # --> "make-kernel" = make clean for kernel, if it is built. very slow.
: # --> "debs" = delete packages in "./output/debs" for current branch and family. causes rebuilds, hopefully cached.
: # --> "alldebs" = delete all packages in "./output/debs", "images" = delete "./output/images",
: # --> "cache" = delete "./output/cache", "sources" = delete "./sources"
: # --> "oldcache" = remove old cached rootfs except for the newest 8 files
: # --> *important*: "make" by itself has been disabled, since Armbian knows how to handle Make timestamping now.
REPOSITORY_INSTALL="" # comma-separated list of core modules which will be installed from repository
REPOSITORY_INSTALL="" # comma-separated list of core packages which will be installed from repository instead of built
# "u-boot", "kernel", "bsp", "armbian-config", "armbian-firmware"
# leave empty to build from sources or use local cache
DEST_LANG="en_US.UTF-8" # sl_SI.UTF-8, en_US.UTF-8
# DEST_LANG="en_US.UTF-8" # Example: "sl_SI.UTF-8" Default: "en_US.UTF-8"
# advanced
EXTERNAL_NEW="prebuilt" # compile and install or install prebuilt additional packages

@@ -27,7 +27,7 @@ call_extension_method() {
# Then a sanity check, hook points should only be invoked after the manager has initialized.
if [[ ${initialize_extension_manager_counter} -lt 1 ]]; then
display_alert "Extension problem" "Call to call_extension_method() (in ${BASH_SOURCE[1]- $(get_extension_hook_stracktrace "${BASH_SOURCE[*]}" "${BASH_LINENO[*]}")}) before extension manager is initialized." "err"
display_alert "Extension problem" "Call to call_extension_method() ($*: in ${BASH_SOURCE[1]- $(get_extension_hook_stracktrace "${BASH_SOURCE[*]}" "${BASH_LINENO[*]}")}) before extension manager is initialized." "err"
fi
# With DEBUG_EXTENSION_CALLS, log the hook call. Users might be wondering what/when is a good hook point to use, and this is visual aid.

@@ -0,0 +1,49 @@
function cli_standard_build_pre_run() {
declare -g ARMBIAN_COMMAND_REQUIRE_BASIC_DEPS="yes" # Require prepare_host_basic to run before the command.
# Super early handling. If no command and not root, become root by using sudo. Some exceptions apply.
if [[ "${EUID}" == "0" ]]; then # we're already root. Either running as real root, or already sudo'ed.
display_alert "Already running as root" "great" "debug"
else # not root.
# Pass the current UID to any further relaunchings (under docker or sudo).
ARMBIAN_CLI_RELAUNCH_PARAMS+=(["SET_OWNER_TO_UID"]="${EUID}") # add params when relaunched under docker
# We've a few options.
# 1) We could check if Docker is working, and do everything under Docker. Users who can use Docker, can "become" root inside a container.
# 2) We could ask for sudo (which _might_ require a password)...
# @TODO: GitHub actions can do both. Sudo without password _and_ Docker; should we prefer Docker? Might have unintended consequences...
if is_docker_ready_to_go; then
# add the current user EUID as a parameter when it's relaunched under docker. SET_OWNER_TO_UID="${EUID}"
display_alert "Trying to build, not root, but Docker is ready to go" "delegating to Docker" "debug"
ARMBIAN_CHANGE_COMMAND_TO="docker"
return 0
fi
# check if we're on Linux via uname. if not, refuse to do anything.
if [[ "$(uname)" != "Linux" ]]; then
display_alert "Not running on Linux; Docker is not available" "refusing to run" "err"
exit 1
fi
display_alert "This script requires root privileges; Docker is unavailable" "trying to use sudo" "wrn"
declare -g ARMBIAN_CLI_RELAUNCH_ARGS=()
produce_relaunch_parameters # produces ARMBIAN_CLI_RELAUNCH_ARGS
sudo --preserve-env "${SRC}/compile.sh" "${ARMBIAN_CLI_RELAUNCH_ARGS[@]}" # MARK: relaunch done here!
display_alert "AFTER SUDO!!!" "AFTER SUDO!!!" "warn"
fi
}
function cli_standard_build_run() {
# @TODO: then many other interesting possibilities like a REPL, which we lost somewhere along the way. docker-shell?
# configuration etc - it initializes the extension manager
prepare_and_config_main_build_single
# Allow for custom user-invoked functions, or do the default build.
if [[ -z $1 ]]; then
main_default_build_single
else
# @TODO: rpardini: check this with extensions usage?
eval "$@"
fi
}

@@ -0,0 +1,9 @@
function cli_config_dump_pre_run() {
declare -g CONFIG_DEFS_ONLY='yes'
}
function cli_config_dump_run() {
# configuration etc - it initializes the extension manager
do_capturing_defs prepare_and_config_main_build_single # this sets CAPTURED_VARS
echo "${CAPTURED_VARS}" # to stdout!
}

@@ -0,0 +1,37 @@
function cli_docker_pre_run() {
if [[ "${DOCKERFILE_GENERATE_ONLY}" == "yes" ]]; then
display_alert "Dockerfile generation only" "func cli_docker_pre_run" "debug"
return 0
fi
# make sure we're not _ALREADY_ running under docker... otherwise eternal loop?
if [[ "${ARMBIAN_RUNNING_IN_CONTAINER}" == "yes" ]]; then
display_alert "wtf" "asking for docker... inside docker; turning to build command" "warn"
# @TODO: wrong, what if we wanna run other stuff inside Docker? not build?
ARMBIAN_CHANGE_COMMAND_TO="build"
fi
}
function cli_docker_run() {
LOG_SECTION="docker_cli_prepare" do_with_logging docker_cli_prepare
if [[ "${DOCKERFILE_GENERATE_ONLY}" == "yes" ]]; then
display_alert "Dockerfile generated" "exiting" "info"
exit 0
fi
# Force showing logs here while building Dockerfile.
SHOW_LOG=yes LOG_SECTION="docker_cli_build_dockerfile" do_with_logging docker_cli_build_dockerfile
LOG_SECTION="docker_cli_prepare_launch" do_with_logging docker_cli_prepare_launch
ARMBIAN_CLI_RELAUNCH_PARAMS+=(["SET_OWNER_TO_UID"]="${EUID}") # fix the owner of files to our UID
ARMBIAN_CLI_RELAUNCH_PARAMS+=(["ARMBIAN_BUILD_UUID"]="${ARMBIAN_BUILD_UUID}") # pass down our uuid to the docker instance
ARMBIAN_CLI_RELAUNCH_PARAMS+=(["SKIP_LOG_ARCHIVE"]="yes") # launched docker instance will not cleanup logs.
declare -g SKIP_LOG_ARCHIVE=yes # Don't archive logs in the parent instance either.
declare -g ARMBIAN_CLI_RELAUNCH_ARGS=()
produce_relaunch_parameters # produces ARMBIAN_CLI_RELAUNCH_ARGS
docker_cli_launch "${ARMBIAN_CLI_RELAUNCH_ARGS[@]}" # MARK: this "re-launches" using the passed params.
}

@@ -1,152 +0,0 @@
#!/usr/bin/env bash
function cli_entrypoint() {
# array, readonly, global, for future reference, "exported" to shut up shellcheck
declare -rg -x -a ARMBIAN_ORIGINAL_ARGV=("${@}")
if [[ "${ARMBIAN_ENABLE_CALL_TRACING}" == "yes" ]]; then
set -T # inherit return/debug traps
mkdir -p "${SRC}"/output/call-traces
echo -n "" > "${SRC}"/output/call-traces/calls.txt
trap 'echo "${BASH_LINENO[@]}|${BASH_SOURCE[@]}|${FUNCNAME[@]}" >> ${SRC}/output/call-traces/calls.txt ;' RETURN
fi
if [[ "${EUID}" == "0" ]] || [[ "${1}" == "vagrant" ]]; then
:
elif [[ "${1}" == docker || "${1}" == dockerpurge || "${1}" == docker-shell ]] && grep -q "$(whoami)" <(getent group docker); then
:
elif [[ "${CONFIG_DEFS_ONLY}" == "yes" ]]; then # this var is set in the ENVIRONMENT, not as parameter.
display_alert "No sudo for" "env CONFIG_DEFS_ONLY=yes" "debug" # not really building in this case, just gathering meta-data.
else
display_alert "This script requires root privileges, trying to use sudo" "" "wrn"
sudo "${SRC}/compile.sh" "$@"
fi
# Purge Armbian Docker images
if [[ "${1}" == dockerpurge && -f /etc/debian_version ]]; then
display_alert "Purging Armbian Docker containers" "" "wrn"
docker container ls -a | grep armbian | awk '{print $1}' | xargs docker container rm &> /dev/null
docker image ls | grep armbian | awk '{print $3}' | xargs docker image rm &> /dev/null
shift
set -- "docker" "$@"
fi
# Docker shell
if [[ "${1}" == docker-shell ]]; then
shift
SHELL_ONLY=yes
set -- "docker" "$@"
fi
handle_docker_vagrant "$@"
prepare_userpatches "$@"
if [[ -z "${CONFIG}" && -n "$1" && -f "${SRC}/userpatches/config-$1.conf" ]]; then
CONFIG="userpatches/config-$1.conf"
shift
fi
# using default if custom not found
if [[ -z "${CONFIG}" && -f "${SRC}/userpatches/config-default.conf" ]]; then
CONFIG="userpatches/config-default.conf"
fi
# source build configuration file
CONFIG_FILE="$(realpath "${CONFIG}")"
if [[ ! -f "${CONFIG_FILE}" ]]; then
display_alert "Config file does not exist" "${CONFIG}" "error"
exit 254
fi
CONFIG_PATH=$(dirname "${CONFIG_FILE}")
# DEST is the main output dir.
declare DEST="${SRC}/output"
if [ -d "$CONFIG_PATH/output" ]; then
DEST="${CONFIG_PATH}/output"
fi
display_alert "Output directory DEST:" "${DEST}" "debug"
# set unique mounting directory for this build.
# basic deps, which include "uuidgen", will be installed _after_ this, so we gotta tolerate it not being there yet.
declare -g ARMBIAN_BUILD_UUID
if [[ -f /usr/bin/uuidgen ]]; then
ARMBIAN_BUILD_UUID="$(uuidgen)"
else
display_alert "uuidgen not found" "uuidgen not installed yet" "info"
ARMBIAN_BUILD_UUID="no-uuidgen-yet-${RANDOM}-$((1 + $RANDOM % 10))$((1 + $RANDOM % 10))$((1 + $RANDOM % 10))$((1 + $RANDOM % 10))"
fi
display_alert "Build UUID:" "${ARMBIAN_BUILD_UUID}" "debug"
# Super-global variables, used everywhere. The directories are NOT _created_ here, since this very early stage.
export WORKDIR="${SRC}/.tmp/work-${ARMBIAN_BUILD_UUID}" # WORKDIR at this stage. It will become TMPDIR later. It has special significance to `mktemp` and others!
export SDCARD="${SRC}/.tmp/rootfs-${ARMBIAN_BUILD_UUID}" # SDCARD (which is NOT an sdcard, but will be, maybe, one day) is where we work the rootfs before final imaging. "rootfs" stage.
export MOUNT="${SRC}/.tmp/mount-${ARMBIAN_BUILD_UUID}" # MOUNT ("mounted on the loop") is the mounted root on final image (via loop). "image" stage
export EXTENSION_MANAGER_TMP_DIR="${SRC}/.tmp/extensions-${ARMBIAN_BUILD_UUID}" # EXTENSION_MANAGER_TMP_DIR used to store extension-composed functions
export DESTIMG="${SRC}/.tmp/image-${ARMBIAN_BUILD_UUID}" # DESTIMG is where the backing image (raw, huge, sparse file) is kept (not the final destination)
export LOGDIR="${SRC}/.tmp/logs-${ARMBIAN_BUILD_UUID}" # Will be initialized very soon, literally, below.
LOG_SECTION=entrypoint start_logging_section # This creates LOGDIR.
add_cleanup_handler trap_handler_cleanup_logging # cleanup handler for logs; it rolls it up from LOGDIR into DEST/logs
if [ "${OFFLINE_WORK}" == "yes" ]; then
display_alert "* " "You are working offline!"
display_alert "* " "Sources, time and host will not be checked"
else
# check and install the basic utilities.
LOG_SECTION="prepare_host_basic" do_with_logging prepare_host_basic
fi
# Source the extensions manager library at this point, before sourcing the config.
# This allows early calls to enable_extension(), but initialization proper is done later.
# shellcheck source=lib/extensions.sh
source "${SRC}"/lib/extensions.sh
display_alert "Using config file" "${CONFIG_FILE}" "info"
pushd "${CONFIG_PATH}" > /dev/null || exit
# shellcheck source=/dev/null
source "${CONFIG_FILE}"
popd > /dev/null || exit
[[ -z "${USERPATCHES_PATH}" ]] && USERPATCHES_PATH="${CONFIG_PATH}"
# Script parameters handling
while [[ "${1}" == *=* ]]; do
parameter=${1%%=*}
value=${1##*=}
shift
display_alert "Command line: setting $parameter to" "${value:-(empty)}" "info"
eval "$parameter=\"$value\""
done
##
## Main entrypoint.
##
# reset completely after sourcing config file
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
# configuration etc - it initializes the extension manager.
do_capturing_defs prepare_and_config_main_build_single # this sets CAPTURED_VARS
if [[ "${CONFIG_DEFS_ONLY}" == "yes" ]]; then
echo "${CAPTURED_VARS}" # to stdout!
else
unset CAPTURED_VARS
# Allow for custom user-invoked functions, or do the default build.
if [[ -z $1 ]]; then
main_default_build_single
else
# @TODO: rpardini: check this with extensions usage?
eval "$@"
fi
fi
# Build done, run the cleanup handlers explicitly.
# This zeroes out the list of cleanups, so it's not done again when the main script exits normally and trap = 0 runs.
run_cleanup_handlers
}

@@ -0,0 +1,24 @@
function cli_requirements_pre_run() {
declare -g ARMBIAN_COMMAND_REQUIRE_BASIC_DEPS="yes" # Require prepare_host_basic to run before the command.
if [[ "$(uname)" != "Linux" ]]; then
display_alert "Not running on Linux" "refusing to run 'requirements'" "err"
exit 1
fi
if [[ "${EUID}" == "0" ]]; then # we're already root. Either running as real root, or already sudo'ed.
display_alert "Already running as root" "great" "debug"
else
# Fail, installing requirements is not allowed as non-root.
exit_with_error "This command requires root privileges - refusing to run"
fi
}
function cli_requirements_run() {
declare -g REQUIREMENTS_DEFS_ONLY='yes' # @TODO: decide, this is already set in ARMBIAN_COMMANDS_TO_VARS_DICT
declare -a -g host_dependencies=()
early_prepare_host_dependencies # tests itself for REQUIREMENTS_DEFS_ONLY=yes too
install_host_dependencies "for REQUIREMENTS_DEFS_ONLY=yes"
display_alert "Done with" "REQUIREMENTS_DEFS_ONLY" "cachehit"
}

@@ -0,0 +1,16 @@
function cli_undecided_pre_run() {
# If undecided, run the 'build' command.
# 'build' will then defer to 'docker' if run on Darwin.
# so save a trip, check if we're on Darwin right here.
if [[ "$(uname)" == "Linux" ]]; then
display_alert "Linux!" "func cli_undecided_pre_run go to build" "debug"
ARMBIAN_CHANGE_COMMAND_TO="build"
else
display_alert "Not under Linux; use docker..." "func cli_undecided_pre_run go to docker" "debug"
ARMBIAN_CHANGE_COMMAND_TO="docker"
fi
}
function cli_undecided_run() {
exit_with_error "Should never run the undecided command. How did this happen?"
}

@@ -0,0 +1,7 @@
function cli_vagrant_pre_run() {
:
}
function cli_vagrant_run() {
:
}

@@ -0,0 +1,49 @@
function armbian_register_commands() {
# More than one command can map to the same handler. In that case, use ARMBIAN_COMMANDS_TO_VARS_DICT for specific vars.
declare -g -A ARMBIAN_COMMANDS_TO_HANDLERS_DICT=(
["docker"]="docker" # thus requires cli_docker_pre_run and cli_docker_run
["docker-purge"]="docker" # idem
["dockerpurge"]="docker" # idem
["docker-shell"]="docker" # idem
["dockershell"]="docker" # idem
["generate-dockerfile"]="docker" # idem
["vagrant"]="vagrant" # thus requires cli_vagrant_pre_run and cli_vagrant_run
["requirements"]="requirements" # implemented in cli_requirements_pre_run and cli_requirements_run # @TODO
["config-dump"]="config_dump" # implemented in cli_config_dump_pre_run and cli_config_dump_run # @TODO
["configdump"]="config_dump" # idem
["build"]="standard_build" # implemented in cli_standard_build_pre_run and cli_standard_build_run
["undecided"]="undecided" # implemented in cli_undecided_pre_run and cli_undecided_run - relaunches either build or docker
)
# Vars to be set for each command. Optional.
declare -g -A ARMBIAN_COMMANDS_TO_VARS_DICT=(
["docker-purge"]="DOCKER_SUBCMD='purge'"
["dockerpurge"]="DOCKER_SUBCMD='purge'"
["docker-shell"]="DOCKER_SUBCMD='shell'"
["dockershell"]="DOCKER_SUBCMD='shell'"
["generate-dockerfile"]="DOCKERFILE_GENERATE_ONLY='yes'"
["requirements"]="REQUIREMENTS_DEFS_ONLY='yes'"
["config-dump"]="CONFIG_DEFS_ONLY='yes'"
["configdump"]="CONFIG_DEFS_ONLY='yes'"
)
# Override the LOG_CLI_ID to change the log file name.
# Will be set to ARMBIAN_COMMAND if not set after all pre-runs done.
declare -g ARMBIAN_LOG_CLI_ID
# Keep a running dict of params/variables. Can't repeat stuff here. Dict.
declare -g -A ARMBIAN_CLI_RELAUNCH_PARAMS=(["ARMBIAN_RELAUNCHED"]="yes")
# Keep a running array of config files needed for relaunch.
declare -g -a ARMBIAN_CLI_RELAUNCH_CONFIGS=()
}

@@ -0,0 +1,158 @@
function cli_entrypoint() {
# array, readonly, global, for future reference, "exported" to shut up shellcheck
declare -rg -x -a ARMBIAN_ORIGINAL_ARGV=("${@}")
if [[ "${ARMBIAN_ENABLE_CALL_TRACING}" == "yes" ]]; then
set -T # inherit return/debug traps
mkdir -p "${SRC}"/output/call-traces
echo -n "" > "${SRC}"/output/call-traces/calls.txt
trap 'echo "${BASH_LINENO[@]}|${BASH_SOURCE[@]}|${FUNCNAME[@]}" >> ${SRC}/output/call-traces/calls.txt ;' RETURN
fi
# @TODO: allow for a super-early userpatches/config-000.custom.conf.sh to be loaded, before anything else.
# This would allow for custom commands and interceptors.
# Decide what we're gonna do. We've a few hardcoded, 1st-argument "commands".
declare -g -A ARMBIAN_COMMANDS_TO_HANDLERS_DICT ARMBIAN_COMMANDS_TO_VARS_DICT
armbian_register_commands # this defines the above two dictionaries
# Process the command line, separating params (XX=YY) from non-params arguments.
# That way they can be set in any order.
declare -A -g ARMBIAN_PARSED_CMDLINE_PARAMS=() # A dict of PARAM=VALUE
declare -a -g ARMBIAN_NON_PARAM_ARGS=() # An array of all non-param arguments
parse_cmdline_params "${@}" # which fills the above vars.
# Now load the key=value pairs from cmdline into environment, before loading config or executing commands.
# This will be done _again_ later, to make sure cmdline params override config et al.
apply_cmdline_params_to_env "early" # which uses ARMBIAN_PARSED_CMDLINE_PARAMS
# From here on, no more ${1} or stuff. We've parsed it all into ARMBIAN_PARSED_CMDLINE_PARAMS or ARMBIAN_NON_PARAM_ARGS and ARMBIAN_COMMAND.
declare -a -g ARMBIAN_CONFIG_FILES=() # fully validated, complete paths to config files.
declare -g ARMBIAN_COMMAND_HANDLER="" ARMBIAN_COMMAND="" ARMBIAN_COMMAND_VARS="" # only valid command and handler will ever be set here.
declare -g ARMBIAN_HAS_UNKNOWN_ARG="no" # if any unknown params, bomb.
for argument in "${ARMBIAN_NON_PARAM_ARGS[@]}"; do # loop over all non-param arguments, find commands and configs.
parse_each_cmdline_arg_as_command_param_or_config "${argument}" # sets all the vars above
done
# More sanity checks.
# If unknowns, bail.
if [[ "${ARMBIAN_HAS_UNKNOWN_ARG}" == "yes" ]]; then
exit_with_error "Unknown arguments found. Please check the output above and fix them."
fi
# @TODO: Have a config that is always included? "${SRC}/userpatches/config-default.conf" ?
# If we don't have a command decided yet, use the undecided command.
if [[ "${ARMBIAN_COMMAND}" == "" ]]; then
display_alert "No command found, using default" "undecided" "debug"
ARMBIAN_COMMAND="undecided"
fi
# If we don't have a command at this stage, we should default either to 'build' or 'docker', depending on OS.
# Give the chosen command a chance to refuse running, or, even, change the final command to run.
# This allows for example the 'build' command to auto-launch under docker, even without specifying it.
# Also allows for launchers to keep themselves when re-launched, yet do something different. (eg: docker under docker does build).
# Or: build under Darwin does docker...
# each _pre_run can change the command and vars to run too, so do it in a loop until it stops changing.
declare -g ARMBIAN_CHANGE_COMMAND_TO="${ARMBIAN_COMMAND}"
while [[ "${ARMBIAN_CHANGE_COMMAND_TO}" != "" ]]; do
display_alert "Still a command to pre-run, this time:" "${ARMBIAN_CHANGE_COMMAND_TO}" "debug"
ARMBIAN_COMMAND="${ARMBIAN_CHANGE_COMMAND_TO}"
armbian_prepare_cli_command_to_run "${ARMBIAN_COMMAND}"
ARMBIAN_CHANGE_COMMAND_TO=""
armbian_cli_pre_run_command
done
# IMPORTANT!!!: it is INVALID to relaunch compile.sh from here. It will cause logging mistakes.
# So the last possible moment to relaunch is in xxxxx_pre_run!
# Also, from here on, the UUID will be generated, output created, logging enabled, etc.
# Init basic dirs.
declare -g DEST="${SRC}/output" USERPATCHES_PATH="${SRC}"/userpatches # DEST is the main output dir, and USERPATCHES_PATH is the userpatches dir.
mkdir -p "${DEST}" "${USERPATCHES_PATH}" # Create output and userpatches directory if not already there
display_alert "Output directory created! DEST:" "${DEST}" "debug"
# set unique mounting directory for this execution.
# basic deps, which include "uuidgen", will be installed _after_ this, so we gotta tolerate it not being there yet.
declare -g ARMBIAN_BUILD_UUID
if [[ "${ARMBIAN_BUILD_UUID}" != "" ]]; then
display_alert "Using passed-in ARMBIAN_BUILD_UUID" "${ARMBIAN_BUILD_UUID}" "debug"
else
if [[ -f /usr/bin/uuidgen ]]; then
ARMBIAN_BUILD_UUID="$(uuidgen)"
else
display_alert "uuidgen not found" "uuidgen not installed yet" "info"
ARMBIAN_BUILD_UUID="no-uuidgen-yet-${RANDOM}-$((1 + $RANDOM % 10))$((1 + $RANDOM % 10))$((1 + $RANDOM % 10))$((1 + $RANDOM % 10))"
fi
display_alert "Generated ARMBIAN_BUILD_UUID" "${ARMBIAN_BUILD_UUID}" "debug"
fi
display_alert "Build UUID:" "${ARMBIAN_BUILD_UUID}" "debug"
# Super-global variables, used everywhere. The directories are NOT _created_ here, since this very early stage.
export WORKDIR="${SRC}/.tmp/work-${ARMBIAN_BUILD_UUID}" # WORKDIR at this stage. It will become TMPDIR later. It has special significance to `mktemp` and others!
export LOGDIR="${SRC}/.tmp/logs-${ARMBIAN_BUILD_UUID}" # Will be initialized very soon, literally, below.
# @TODO: These are used by actual build, move to its cli handler.
export SDCARD="${SRC}/.tmp/rootfs-${ARMBIAN_BUILD_UUID}" # SDCARD (which is NOT an sdcard, but will be, maybe, one day) is where we work the rootfs before final imaging. "rootfs" stage.
export MOUNT="${SRC}/.tmp/mount-${ARMBIAN_BUILD_UUID}" # MOUNT ("mounted on the loop") is the mounted root on final image (via loop). "image" stage
export EXTENSION_MANAGER_TMP_DIR="${SRC}/.tmp/extensions-${ARMBIAN_BUILD_UUID}" # EXTENSION_MANAGER_TMP_DIR used to store extension-composed functions
export DESTIMG="${SRC}/.tmp/image-${ARMBIAN_BUILD_UUID}" # DESTIMG is where the backing image (raw, huge, sparse file) is kept (not the final destination)
# Make sure ARMBIAN_LOG_CLI_ID is set, and unique.
# Pre-runs might change it, but if not set, default to ARMBIAN_COMMAND.
declare -g ARMBIAN_LOG_CLI_ID="${ARMBIAN_LOG_CLI_ID:-${ARMBIAN_COMMAND}}"
LOG_SECTION="entrypoint" start_logging_section # This creates LOGDIR. @TODO: also maybe causes a spurious group to be created in the log file
add_cleanup_handler trap_handler_cleanup_logging # cleanup handler for logs; it rolls it up from LOGDIR into DEST/logs @TODO: use the COMMAND in the filenames.
# @TODO: So gigantic contention point here about logging the basic deps installation.
if [[ "${ARMBIAN_COMMAND_REQUIRE_BASIC_DEPS}" == "yes" ]]; then
if [[ "${OFFLINE_WORK}" == "yes" ]]; then
display_alert "* " "You are working offline!"
display_alert "* " "Sources, time and host will not be checked"
else
# check and install the basic utilities;
LOG_SECTION="prepare_host_basic" do_with_logging prepare_host_basic # This includes the 'docker' case.
fi
fi
# Source the extensions manager library at this point, before sourcing the config.
# This allows early calls to enable_extension(), but initialization proper is done later.
# shellcheck source=lib/extensions.sh
source "${SRC}"/lib/extensions.sh
# Loop over the ARMBIAN_CONFIG_FILES array and source each. The order is important.
for config_file in "${ARMBIAN_CONFIG_FILES[@]}"; do
local config_filename="${config_file##*/}" config_dir="${config_file%/*}"
display_alert "Sourcing config file" "${config_filename}" "debug"
# use pushd/popd to change directory to the config file's directory, so that relative paths in the config file work.
pushd "${config_dir}" > /dev/null || exit_with_error "Failed to pushd to ${config_dir}"
# shellcheck source=/dev/null
LOG_SECTION="userpatches_config:${config_filename}" do_with_logging source "${config_file}"
# reset completely after sourcing config file
set -e
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
popd > /dev/null || exit_with_error "Failed to popd from ${config_dir}"
# Apply the params received from the command line _again_ after running the config.
# This ensures that params take precedence over stuff possibly defined in the config.
apply_cmdline_params_to_env "after config '${config_filename}'" # which uses ARMBIAN_PARSED_CMDLINE_PARAMS
done
display_alert "Executing final CLI command" "${ARMBIAN_COMMAND}" "debug"
armbian_cli_run_command
display_alert "Done Executing final CLI command" "${ARMBIAN_COMMAND}" "debug"
# Build done, run the cleanup handlers explicitly.
# This zeroes out the list of cleanups, so it's not done again when the main script exits normally and the trap on EXIT runs.
run_cleanup_handlers
}

@@ -1,87 +1,183 @@
#!/usr/bin/env bash
# Misc functions from compile.sh
function handle_docker_vagrant() {
# Check for Vagrant
if [[ "${1}" == vagrant && -z "$(command -v vagrant)" ]]; then
display_alert "Vagrant not installed." "Installing"
sudo apt-get update
sudo apt-get install -y vagrant virtualbox
fi
# Install Docker if not there but wanted. We cover only Debian based distro install. On other distros, manual Docker install is needed
if [[ "${1}" == docker && -f /etc/debian_version && -z "$(command -v docker)" ]]; then
DOCKER_BINARY="docker-ce"
# add exception for Ubuntu Focal until Docker provides dedicated binary
codename=$(cat /etc/os-release | grep VERSION_CODENAME | cut -d"=" -f2)
codeid=$(cat /etc/os-release | grep ^NAME | cut -d"=" -f2 | awk '{print tolower($0)}' | tr -d '"' | awk '{print $1}')
[[ "${codename}" == "debbie" ]] && codename="buster" && codeid="debian"
[[ "${codename}" == "ulyana" || "${codename}" == "jammy" || "${codename}" == "kinetic" || "${codename}" == "lunar" ]] && codename="focal" && codeid="ubuntu"
# different binaries for some. TBD. Need to check for all others
[[ "${codename}" =~ focal|hirsute ]] && DOCKER_BINARY="docker containerd docker.io"
display_alert "Docker not installed." "Installing" "Info"
sudo bash -c "echo \"deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/${codeid} ${codename} stable\" > /etc/apt/sources.list.d/docker.list"
sudo bash -c "curl -fsSL \"https://download.docker.com/linux/${codeid}/gpg\" | apt-key add -qq - > /dev/null 2>&1 "
export DEBIAN_FRONTEND=noninteractive
sudo apt-get update
sudo apt-get install -y -qq --no-install-recommends ${DOCKER_BINARY}
display_alert "Add yourself to docker group to avoid root privileges" "" "wrn"
"${SRC}/compile.sh" "$@"
exit $?
fi
}
# This is called like this:
# declare -A -g ARMBIAN_PARSED_CMDLINE_PARAMS=()
# declare -a -g ARMBIAN_NON_PARAM_ARGS=()
# parse_cmdline_params "${@}" # which fills the vars above, being global.
function parse_cmdline_params() {
declare -A -g ARMBIAN_PARSED_CMDLINE_PARAMS=()
declare -a -g ARMBIAN_NON_PARAM_ARGS=()
# loop over the arguments parse them out
local arg
for arg in "${@}"; do
if [[ "${arg}" == *=* ]]; then # contains an equal sign. it's a param.
local param_name param_value param_value_desc
param_name=${arg%%=*}
param_value=${arg#*=} # split at the FIRST '=' only, so values may themselves contain '='
param_value_desc="${param_value:-(empty)}"
ARMBIAN_PARSED_CMDLINE_PARAMS["${param_name}"]="${param_value}" # For current run.
ARMBIAN_CLI_RELAUNCH_PARAMS["${param_name}"]="${param_value}" # For relaunch.
display_alert "Command line: parsed parameter '$param_name' to" "${param_value_desc}" "debug"
elif [[ "x${arg}x" != "xx" ]]; then # not a param, not empty, store it in the non-param array for later usage
local non_param_value="${arg}"
local non_param_value_desc="${non_param_value:-(empty)}"
display_alert "Command line: storing non-param argument" "${non_param_value_desc}" "debug"
ARMBIAN_NON_PARAM_ARGS+=("${non_param_value}")
fi
done
}
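The split done by `parse_cmdline_params()` can be sketched standalone. The `demo_parse` helper below is hypothetical; the real function also records relaunch params and logs via `display_alert`:

```bash
#!/usr/bin/env bash
# Sketch: split "KEY=VALUE" arguments from bare arguments, as parse_cmdline_params() does.
declare -A parsed_params=()
declare -a non_param_args=()
function demo_parse() {
    local arg
    for arg in "$@"; do
        if [[ "${arg}" == *=* ]]; then
            parsed_params["${arg%%=*}"]="${arg#*=}" # split at the first '=' only
        elif [[ -n "${arg}" ]]; then
            non_param_args+=("${arg}") # bare argument: a command or config name
        fi
    done
}
demo_parse "build" "BOARD=odroidn2" "RELEASE=jammy" "my-config"
echo "BOARD=${parsed_params[BOARD]} args=${non_param_args[*]}" # prints: BOARD=odroidn2 args=build my-config
```

Splitting on the first `=` (via `${arg#*=}`) keeps values like `EXTRA=a=b` intact.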
function prepare_userpatches() {
# Create userpatches directory if it does not exist
mkdir -p "${SRC}"/userpatches
# Create example configs if none found in userpatches
if ! ls "${SRC}"/userpatches/{config-default.conf,config-docker.conf,config-vagrant.conf} 1> /dev/null 2>&1; then
# Migrate old configs
if ls "${SRC}"/*.conf 1> /dev/null 2>&1; then
display_alert "Migrate config files to userpatches directory" "all *.conf" "info"
cp "${SRC}"/*.conf "${SRC}"/userpatches || exit 1
rm "${SRC}"/*.conf
[[ ! -L "${SRC}"/userpatches/config-example.conf ]] && ln -fs config-example.conf "${SRC}"/userpatches/config-default.conf || exit 1
fi
display_alert "Create example config file using template" "config-default.conf" "info"
# Create example config
if [[ ! -f "${SRC}"/userpatches/config-example.conf ]]; then
cp "${SRC}"/config/templates/config-example.conf "${SRC}"/userpatches/config-example.conf || exit 1
fi
# Link default config to example config
if [[ ! -f "${SRC}"/userpatches/config-default.conf ]]; then
ln -fs config-example.conf "${SRC}"/userpatches/config-default.conf || exit 1
fi
# Create Docker config
if [[ ! -f "${SRC}"/userpatches/config-docker.conf ]]; then
cp "${SRC}"/config/templates/config-docker.conf "${SRC}"/userpatches/config-docker.conf || exit 1
fi
# Create Docker file
if [[ ! -f "${SRC}"/userpatches/Dockerfile ]]; then
cp "${SRC}"/config/templates/Dockerfile "${SRC}"/userpatches/Dockerfile || exit 1
fi
# Create Vagrant config
if [[ ! -f "${SRC}"/userpatches/config-vagrant.conf ]]; then
cp "${SRC}"/config/templates/config-vagrant.conf "${SRC}"/userpatches/config-vagrant.conf || exit 1
fi
# Create Vagrant file
if [[ ! -f "${SRC}"/userpatches/Vagrantfile ]]; then
cp "${SRC}"/config/templates/Vagrantfile "${SRC}"/userpatches/Vagrantfile || exit 1
fi
fi
}
# This is called:
# apply_cmdline_params_to_env "reason" # reads from global ARMBIAN_PARSED_CMDLINE_PARAMS
# It can be called early on, or later after having sourced the config. Show what is happening.
function apply_cmdline_params_to_env() {
declare -A -g ARMBIAN_PARSED_CMDLINE_PARAMS # Hopefully this has values
declare __my_reason="${1}"
shift
# Loop over the dictionary and apply the values to the environment.
for param_name in "${!ARMBIAN_PARSED_CMDLINE_PARAMS[@]}"; do
local param_value param_value_desc current_env_value current_env_value_desc
# get the current value from the environment
current_env_value="${!param_name}"
current_env_value_desc="${current_env_value:-(empty)}"
# get the new value from the dictionary
param_value="${ARMBIAN_PARSED_CMDLINE_PARAMS[${param_name}]}"
param_value_desc="${param_value:-(empty)}"
# Compare, log, and apply.
if [[ "${current_env_value}" != "${param_value}" ]]; then
display_alert "Applying cmdline param" "'$param_name': '${current_env_value_desc}' --> '${param_value_desc}' ${__my_reason}" "cmdline"
# use `declare -g` to make it global, since we're in a function.
eval "declare -g $param_name=\"$param_value\""
else
# rpardini: the strategic amount of spacing in log files shows the kind of neuroticism that drives me.
display_alert "Skip cmdline param" "'$param_name': already set to '${param_value_desc}' ${__my_reason}" "info"
fi
done
}
function armbian_prepare_cli_command_to_run() {
local command_id="${1}"
display_alert "Preparing to run command" "${command_id}" "debug"
ARMBIAN_COMMAND="${command_id}"
ARMBIAN_COMMAND_HANDLER="${ARMBIAN_COMMANDS_TO_HANDLERS_DICT[${command_id}]}"
ARMBIAN_COMMAND_VARS="${ARMBIAN_COMMANDS_TO_VARS_DICT[${command_id}]}"
# @TODO: actually set the vars...
local set_vars_for_command=""
if [[ -n "${ARMBIAN_COMMAND_VARS}" ]]; then
# Loop over them, expanding...
for var_piece in ${ARMBIAN_COMMAND_VARS}; do
local var_decl="declare -g ${var_piece};"
display_alert "Command handler: setting variable" "${var_decl}" "debug"
set_vars_for_command+=" ${var_decl}"
done
fi
local pre_run_function_name="cli_${ARMBIAN_COMMAND_HANDLER}_pre_run"
local run_function_name="cli_${ARMBIAN_COMMAND_HANDLER}_run"
# Reset the functions.
function armbian_cli_pre_run_command() {
display_alert "No pre-run function for command" "${ARMBIAN_COMMAND}" "warn"
}
function armbian_cli_run_command() {
display_alert "No run function for command" "${ARMBIAN_COMMAND}" "warn"
}
# Materialize functions to call that specific command.
if [[ $(type -t "${pre_run_function_name}" || true) == function ]]; then
eval "$(
cat <<- EOF
display_alert "Setting up pre-run function for command" "${ARMBIAN_COMMAND}: ${pre_run_function_name}" "debug"
function armbian_cli_pre_run_command() {
# Set the variables defined in ARMBIAN_COMMAND_VARS
${set_vars_for_command}
display_alert "Calling pre-run function for command" "${ARMBIAN_COMMAND}: ${pre_run_function_name}" "debug"
${pre_run_function_name}
}
EOF
)"
fi
if [[ $(type -t "${run_function_name}" || true) == function ]]; then
eval "$(
cat <<- EOF
display_alert "Setting up run function for command" "${ARMBIAN_COMMAND}: ${run_function_name}" "debug"
function armbian_cli_run_command() {
# Set the variables defined in ARMBIAN_COMMAND_VARS
${set_vars_for_command}
display_alert "Calling run function for command" "${ARMBIAN_COMMAND}: ${run_function_name}" "debug"
${run_function_name}
}
EOF
)"
fi
}
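The handler materialization can be reduced to a tiny standalone sketch: a dispatcher generates a wrapper function via `eval` only if the handler exists. All names below (`cli_demo_run`, `make_wrapper`, `run_command`) are hypothetical stand-ins for the real `cli_<handler>_run` / `armbian_cli_run_command` pair:

```bash
#!/usr/bin/env bash
# Sketch: materialize a wrapper that sets a command's variables, then calls its handler.
function cli_demo_run() { echo "running demo with VAR=${VAR}"; }
function make_wrapper() {
    local handler="cli_${1}_run" vars="${2}"
    if [[ $(type -t "${handler}" || true) == function ]]; then
        # generate a wrapper that first sets the command's variables, then calls the handler
        eval "function run_command() { declare -g ${vars}; ${handler}; }"
    else
        eval "function run_command() { echo 'no handler for ${1}'; }"
    fi
}
make_wrapper "demo" "VAR=hello"
run_command # prints: running demo with VAR=hello
```

The `eval` indirection lets the variable assignments be baked into the wrapper at preparation time rather than looked up at call time.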
function parse_each_cmdline_arg_as_command_param_or_config() {
local is_command="no" is_config="no" command_handler conf_path conf_sh_path config_file=""
local argument="${1}"
# lookup if it is a command.
if [[ -n "${ARMBIAN_COMMANDS_TO_HANDLERS_DICT[${argument}]}" ]]; then
is_command="yes"
command_handler="${ARMBIAN_COMMANDS_TO_HANDLERS_DICT[${argument}]}"
display_alert "Found command!" "${argument} is handled by '${command_handler}'" "debug"
fi
# see if we can find config file in userpatches. can be either config-${argument}.conf or config-${argument}.conf.sh
conf_path="${SRC}/userpatches/config-${argument}.conf"
conf_sh_path="${SRC}/userpatches/config-${argument}.conf.sh"
# early safety net: immediately bomb if we find both forms of config. it's too confusing. choose one.
if [[ -f ${conf_path} && -f ${conf_sh_path} ]]; then
exit_with_error "Found both config-${argument}.conf and config-${argument}.conf.sh in userpatches. Please remove one."
elif [[ -f ${conf_sh_path} ]]; then
config_file="${conf_sh_path}"
is_config="yes"
elif [[ -f ${conf_path} ]]; then
config_file="${conf_path}"
is_config="yes"
fi
# Sanity check. If we have both a command and a config, bomb.
if [[ "${is_command}" == "yes" && "${is_config}" == "yes" ]]; then
exit_with_error "You cannot have a configuration file named '${config_file}'. '${argument}' is a command name and is reserved for internal Armbian usage. Sorry. Please rename your config file and pass its name as an argument, and I'll use it. PS: You don't need a config file for 'docker' anymore, Docker is all managed by Armbian now."
elif [[ "${is_config}" == "yes" ]]; then # we have a config only
display_alert "Adding config file to list" "${config_file}" "debug"
ARMBIAN_CONFIG_FILES+=("${config_file}") # full path to be sourced
ARMBIAN_CLI_RELAUNCH_CONFIGS+=" ${argument}" # space-separated name references to be relaunched
elif [[ "${is_command}" == "yes" ]]; then # we have a command, only.
# sanity check. we can't have more than one command. decide!
if [[ -n "${ARMBIAN_COMMAND}" ]]; then
exit_with_error "You cannot specify more than one command. You have '${ARMBIAN_COMMAND}' and '${argument}'. Please decide which one you want to run and pass only that one."
fi
ARMBIAN_COMMAND="${argument}" # too early for armbian_prepare_cli_command_to_run "${argument}"
else
# We've an unknown argument. Alert now, bomb later.
ARMBIAN_HAS_UNKNOWN_ARG="yes"
display_alert "Unknown argument" "${argument}" "err"
fi
}
# Produce relaunch parameters. Add the running configs, arguments, and command.
# Declare and use ARMBIAN_CLI_RELAUNCH_ARGS as "${ARMBIAN_CLI_RELAUNCH_ARGS[@]}"
function produce_relaunch_parameters() {
declare -g -a ARMBIAN_CLI_RELAUNCH_ARGS=()
# add the running parameters from ARMBIAN_CLI_RELAUNCH_PARAMS dict
for param in "${!ARMBIAN_CLI_RELAUNCH_PARAMS[@]}"; do
ARMBIAN_CLI_RELAUNCH_ARGS+=("${param}=${ARMBIAN_CLI_RELAUNCH_PARAMS[${param}]}")
done
# add the running configs
for config in ${ARMBIAN_CLI_RELAUNCH_CONFIGS}; do
ARMBIAN_CLI_RELAUNCH_ARGS+=("${config}")
done
display_alert "Produced relaunch args:" "ARMBIAN_CLI_RELAUNCH_ARGS: ${ARMBIAN_CLI_RELAUNCH_ARGS[*]}" "debug"
# @TODO: add the command. if we have one.
}
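The relaunch-argument assembly above can be sketched standalone. Values here are hypothetical; the real function reads the globals filled in by `parse_cmdline_params` and `parse_each_cmdline_arg_as_command_param_or_config`:

```bash
#!/usr/bin/env bash
# Sketch: rebuild "KEY=VALUE" arguments plus config names for a relaunch,
# as produce_relaunch_parameters() does.
declare -A relaunch_params=([BOARD]="uefi-x86" [BRANCH]="edge")
relaunch_configs="myconfig" # space-separated config names
declare -a relaunch_args=()
for param in "${!relaunch_params[@]}"; do
    relaunch_args+=("${param}=${relaunch_params[${param}]}")
done
for config in ${relaunch_configs}; do # intentionally unquoted: split on spaces
    relaunch_args+=("${config}")
done
echo "would relaunch with: ${relaunch_args[*]}"
```

Note the associative-array iteration order is unspecified in bash, which is fine here since `KEY=VALUE` params are order-independent.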

@@ -24,9 +24,21 @@ function do_main_configuration() {
[[ -z $ROOTPWD ]] && ROOTPWD="1234" # Must be changed @first login
[[ -z $MAINTAINER ]] && MAINTAINER="Igor Pecovnik" # deb signature
[[ -z $MAINTAINERMAIL ]] && MAINTAINERMAIL="igor.pecovnik@****l.com" # deb signature
DEST_LANG="${DEST_LANG:-"en_US.UTF-8"}" # en_US.UTF-8 is default locale for target
display_alert "DEST_LANG..." "DEST_LANG: ${DEST_LANG}" "debug"
export SKIP_EXTERNAL_TOOLCHAINS="${SKIP_EXTERNAL_TOOLCHAINS:-yes}" # don't use any external toolchains, by default.
# Timezone
if [[ -f /etc/timezone ]]; then # Timezone for target is taken from host, if it exists.
TZDATA=$(cat /etc/timezone)
display_alert "Using host's /etc/timezone for" "TZDATA: ${TZDATA}" "debug"
else
display_alert "Host has no /etc/timezone" "Using Etc/UTC by default" "debug"
TZDATA="Etc/UTC" # If no /etc/timezone on host, default to UTC.
fi
USEALLCORES=yes # Use all CPU cores for compiling
HOSTRELEASE=$(cat /etc/os-release | grep VERSION_CODENAME | cut -d"=" -f2)
[[ -z $HOSTRELEASE ]] && HOSTRELEASE=$(cut -d'/' -f1 /etc/debian_version)
[[ -z $EXIT_PATCHING_ERROR ]] && EXIT_PATCHING_ERROR="" # exit patching if failed
@@ -34,8 +46,12 @@ function do_main_configuration() {
cd "${SRC}" || exit
[[ -z "${CHROOT_CACHE_VERSION}" ]] && CHROOT_CACHE_VERSION=7
if [[ -d "${SRC}/.git" ]]; then
BUILD_REPOSITORY_URL=$(git remote get-url "$(git remote | grep origin)")
BUILD_REPOSITORY_COMMIT=$(git describe --match=d_e_a_d_b_e_e_f --always --dirty)
fi
ROOTFS_CACHE_MAX=200 # max number of rootfs cache, older ones will be cleaned up
# .deb compression. xz is standard, but slow, so it is avoided by default when not running in CI. one day, zstd.
@@ -397,6 +413,7 @@ desktop/${RELEASE}/environments/${DESKTOP_ENVIRONMENT}/appgroups
PACKAGE_MAIN_LIST="$(cleanup_list PACKAGE_LIST)"
[[ $BUILD_DESKTOP == yes ]] && PACKAGE_LIST="$PACKAGE_LIST $PACKAGE_LIST_DESKTOP"
# @TODO: what is the use of changing PACKAGE_LIST after PACKAGE_MAIN_LIST was set?
PACKAGE_LIST="$(cleanup_list PACKAGE_LIST)"
# remove any packages defined in PACKAGE_LIST_RM in lib.config
@@ -455,7 +472,11 @@ function write_config_summary_output_file() {
local debug_dpkg_arch debug_uname debug_virt debug_src_mount debug_src_perms debug_src_temp_perms
debug_dpkg_arch="$(dpkg --print-architecture)"
debug_uname="$(uname -a)"
# We might not have systemd-detect-virt, especially inside Docker; Docker images have no systemd...
debug_virt="unknown-nosystemd"
if [[ -n "$(command -v systemd-detect-virt)" ]]; then
debug_virt="$(systemd-detect-virt || true)"
fi
debug_src_mount="$(findmnt --output TARGET,SOURCE,FSTYPE,AVAIL --target "${SRC}" --uniq)"
debug_src_perms="$(getfacl -p "${SRC}")"
debug_src_temp_perms="$(getfacl -p "${SRC}"/.tmp 2> /dev/null)"

@@ -15,14 +15,14 @@
fel_prepare_host() {
# Start rpcbind for NFS if inside docker container
if armbian_is_running_in_container; then service rpcbind start; fi
# remove and re-add NFS share
rm -f /etc/exports.d/armbian.exports
mkdir -p /etc/exports.d
echo "$FEL_ROOTFS *(rw,async,no_subtree_check,no_root_squash,fsid=root)" > /etc/exports.d/armbian.exports
# Start NFS server if inside docker container
if armbian_is_running_in_container; then service nfs-kernel-server start; fi
exportfs -ra
}

@@ -30,9 +30,13 @@ function improved_git_fetch() {
# workaround new limitations imposed by CVE-2022-24765 fix in git, otherwise "fatal: unsafe repository"
function git_ensure_safe_directory() {
if [[ -n "$(command -v git)" ]]; then
local git_dir="$1"
display_alert "git: Marking directory as safe" "$git_dir" "debug"
run_host_command_logged git config --global --add safe.directory "$git_dir"
else
display_alert "git not installed" "a true wonder how you got this far without git - it will be installed for you" "warn"
fi
}
# fetch_from_repo <url> <directory> <ref> <ref_subdir>

@@ -1,6 +1,7 @@
# Management of apt-cacher-ng aka acng
function acng_configure_and_restart_acng() {
if ! armbian_is_host_running_systemd; then return 0; fi # do nothing if host is not running systemd
[[ $NO_APT_CACHER == yes ]] && return 0 # don't if told not to. NO_something=yes is very confusing, but kept for historical reasons
[[ "${APT_PROXY_ADDR:-localhost:3142}" != "localhost:3142" ]] && return 0 # also not if acng not local to builder machine

@@ -16,6 +16,9 @@ prepare_host_basic() {
"curl:curl"
"gpg:gnupg"
"gawk:gawk"
"linux-version:linux-base"
"locale-gen:locales"
"git:git"
)
for check_pack in "${checklist[@]}"; do
@@ -23,9 +26,17 @@
done
if [[ -n $install_pack ]]; then
# This obviously only works on Debian or Ubuntu.
if [[ ! -f /etc/debian_version ]]; then
exit_with_error "Missing packages -- can't install basic packages on non Debian/Ubuntu"
fi
local sudo_prefix="" && is_root_or_sudo_prefix sudo_prefix # nameref; "sudo_prefix" will be 'sudo' or ''
display_alert "Updating and installing basic packages on host" "${sudo_prefix}: ${install_pack}"
run_host_command_logged "${sudo_prefix}" apt-get -qq update
run_host_command_logged "${sudo_prefix}" apt-get install -qq -y --no-install-recommends $install_pack
else
display_alert "basic-deps are already installed on host" "nothing to be done" "debug"
fi
}

lib/functions/host/docker.sh (new executable file)
@@ -0,0 +1,364 @@
#############################################################################################################
# @TODO: called by no-one, yet.
function check_and_install_docker_daemon() {
# @TODO: sincerely, not worth keeping this. Send user to Docker install docs. `adduser $USER docker` is important on Linux.
# Install Docker if not there but wanted. We cover only Debian based distro install. On other distros, manual Docker install is needed
if [[ "${1}" == docker && -f /etc/debian_version && -z "$(command -v docker)" ]]; then
DOCKER_BINARY="docker-ce"
# add exception for Ubuntu Focal until Docker provides dedicated binary
codename=$(cat /etc/os-release | grep VERSION_CODENAME | cut -d"=" -f2)
codeid=$(cat /etc/os-release | grep ^NAME | cut -d"=" -f2 | awk '{print tolower($0)}' | tr -d '"' | awk '{print $1}')
[[ "${codename}" == "debbie" ]] && codename="buster" && codeid="debian"
[[ "${codename}" == "ulyana" || "${codename}" == "jammy" ]] && codename="focal" && codeid="ubuntu"
# different binaries for some. TBD. Need to check for all others
[[ "${codename}" =~ focal|hirsute ]] && DOCKER_BINARY="docker containerd docker.io"
display_alert "Docker not installed." "Installing" "Info"
sudo bash -c "echo \"deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/${codeid} ${codename} stable\" > /etc/apt/sources.list.d/docker.list"
sudo bash -c "curl -fsSL \"https://download.docker.com/linux/${codeid}/gpg\" | apt-key add -qq - > /dev/null 2>&1 "
export DEBIAN_FRONTEND=noninteractive
sudo apt-get update
sudo apt-get install -y -qq --no-install-recommends ${DOCKER_BINARY}
display_alert "Add yourself to docker group to avoid root privileges" "" "wrn"
"${SRC}/compile.sh" "$@"
exit $?
fi
}
# Usage: if is_docker_ready_to_go; then ...; fi
function is_docker_ready_to_go() {
# For either Linux or Darwin.
# Gotta tick all these boxes:
# 0) NOT ALREADY UNDER DOCKER.
# 1) can find the `docker` command in the path, via command -v
# 2) can run `docker info` without errors
if [[ "${ARMBIAN_RUNNING_IN_CONTAINER}" == "yes" ]]; then
display_alert "Can't use Docker" "Actually ALREADY UNDER DOCKER!" "debug"
return 1
fi
if [[ -z "$(command -v docker)" ]]; then
display_alert "Can't use Docker" "docker command not found" "debug"
return 1
fi
if ! docker info > /dev/null 2>&1; then
display_alert "Can't use Docker" "docker info failed" "debug"
return 1
fi
# If we get here, we're good to go.
return 0
}
# Called by the cli-entrypoint. At this moment ${1} is already shifted; we know it via ${DOCKER_SUBCMD} now.
function cli_handle_docker() {
display_alert "Handling" "docker" "info"
exit 0
# Purge Armbian Docker images
if [[ "${1}" == dockerpurge && -f /etc/debian_version ]]; then
display_alert "Purging Armbian Docker containers" "" "wrn"
docker container ls -a | grep armbian | awk '{print $1}' | xargs docker container rm &> /dev/null
docker image ls | grep armbian | awk '{print $3}' | xargs docker image rm &> /dev/null
# removes "dockerpurge" from $1, thus $2 becomes $1
shift
set -- "docker" "$@"
fi
# Docker shell
if [[ "${1}" == docker-shell ]]; then
# this swaps the value of $1 with 'docker', and life continues
shift
SHELL_ONLY=yes
set -- "docker" "$@"
fi
}
function docker_cli_prepare() {
# @TODO: Make sure we can access docker, on Linux; gotta be part of 'docker' group: grep -q "$(whoami)" <(getent group docker)
declare -g DOCKER_ARMBIAN_INITIAL_IMAGE_TAG="armbian.local.only/armbian-build:initial"
#declare -g DOCKER_ARMBIAN_BASE_IMAGE="${DOCKER_ARMBIAN_BASE_IMAGE:-"debian:bookworm"}" # works Linux & Darwin
#declare -g DOCKER_ARMBIAN_BASE_IMAGE="${DOCKER_ARMBIAN_BASE_IMAGE:-"debian:sid"}" # works Linux & Darwin
#declare -g DOCKER_ARMBIAN_BASE_IMAGE="${DOCKER_ARMBIAN_BASE_IMAGE:-"debian:bullseye"}" # does NOT work under Darwin? loop problems.
declare -g DOCKER_ARMBIAN_BASE_IMAGE="${DOCKER_ARMBIAN_BASE_IMAGE:-"ubuntu:jammy"}" # works Linux & Darwin
declare -g DOCKER_ARMBIAN_TARGET_PATH="${DOCKER_ARMBIAN_TARGET_PATH:-"/armbian"}"
# If we're NOT building the public, official image, then USE the public, official image as base.
# IMPORTANT: This has to match the naming scheme for the tag that is used in the GitHub actions workflow.
if [[ "${DOCKERFILE_USE_ARMBIAN_IMAGE_AS_BASE}" != "no" ]]; then
local wanted_os_tag="${DOCKER_ARMBIAN_BASE_IMAGE%%:*}"
local wanted_release_tag="${DOCKER_ARMBIAN_BASE_IMAGE##*:}"
# @TODO: this is rpardini's build. It's done in a different repo, so that's why the strange "armbian-release" name. It should be armbian/build:ubuntu-jammy-latest or something.
DOCKER_ARMBIAN_BASE_IMAGE="ghcr.io/rpardini/armbian-release:armbian-next-${wanted_os_tag}-${wanted_release_tag}-latest"
display_alert "Using prebuilt Armbian image as base for '${wanted_os_tag}-${wanted_release_tag}'" "DOCKER_ARMBIAN_BASE_IMAGE: ${DOCKER_ARMBIAN_BASE_IMAGE}" "info"
fi
# @TODO: this might be unified with prepare_basic_deps
declare -g -a BASIC_DEPS=("bash" "git" "psmisc" "uuid-runtime")
#############################################################################################################
# Prepare some dependencies; these will be used on the Dockerfile
declare -a -g host_dependencies=()
REQUIREMENTS_DEFS_ONLY=yes early_prepare_host_dependencies
display_alert "Pre-game dependencies" "${host_dependencies[*]}" "debug"
#############################################################################################################
# Detect some docker info.
DOCKER_SERVER_VERSION="$(docker info | grep -i -e "Server Version\:" | cut -d ":" -f 2 | xargs echo -n)"
display_alert "Docker Server version" "${DOCKER_SERVER_VERSION}" "debug"
DOCKER_SERVER_KERNEL_VERSION="$(docker info | grep -i -e "Kernel Version\:" | cut -d ":" -f 2 | xargs echo -n)"
display_alert "Docker Server Kernel version" "${DOCKER_SERVER_KERNEL_VERSION}" "debug"
DOCKER_SERVER_TOTAL_RAM="$(docker info | grep -i -e "Total memory\:" | cut -d ":" -f 2 | xargs echo -n)"
display_alert "Docker Server Total RAM" "${DOCKER_SERVER_TOTAL_RAM}" "debug"
DOCKER_SERVER_CPUS="$(docker info | grep -i -e "CPUs\:" | cut -d ":" -f 2 | xargs echo -n)"
display_alert "Docker Server CPUs" "${DOCKER_SERVER_CPUS}" "debug"
DOCKER_SERVER_OS="$(docker info | grep -i -e "Operating System\:" | cut -d ":" -f 2 | xargs echo -n)"
display_alert "Docker Server OS" "${DOCKER_SERVER_OS}" "debug"
declare -g DOCKER_ARMBIAN_HOST_OS_UNAME
DOCKER_ARMBIAN_HOST_OS_UNAME="$(uname)"
display_alert "Local uname" "${DOCKER_ARMBIAN_HOST_OS_UNAME}" "debug"
DOCKER_BUILDX_VERSION="$(docker info | grep -i -e "buildx\:" | cut -d ":" -f 2 | xargs echo -n)"
display_alert "Docker Buildx version" "${DOCKER_BUILDX_VERSION}" "debug"
declare -g DOCKER_HAS_BUILDX=no
declare -g -a DOCKER_BUILDX_OR_BUILD=("build")
if [[ -n "${DOCKER_BUILDX_VERSION}" ]]; then
DOCKER_HAS_BUILDX=yes
DOCKER_BUILDX_OR_BUILD=("buildx" "build" "--progress=plain")
fi
display_alert "Docker has buildx?" "${DOCKER_HAS_BUILDX}" "debug"
# Info summary message. Thank you, GitHub Co-pilot!
display_alert "Docker info" "Docker ${DOCKER_SERVER_VERSION} Kernel:${DOCKER_SERVER_KERNEL_VERSION} RAM:${DOCKER_SERVER_TOTAL_RAM} CPUs:${DOCKER_SERVER_CPUS} OS:'${DOCKER_SERVER_OS}' under '${DOCKER_ARMBIAN_HOST_OS_UNAME}' - buildx ${DOCKER_HAS_BUILDX}" "sysinfo"
# @TODO: grab git info, add as labels et al to Docker... (already done in GHA workflow)
display_alert "Creating" ".dockerignore" "info"
cat <<- DOCKERIGNORE > "${SRC}"/.dockerignore
# Start by ignoring everything
*
# Include certain files and directories; mostly the build system, but not other parts.
!/VERSION
!/LICENSE
!/compile.sh
!/lib
!/extensions
!/config/sources
!/config/templates
# Ignore unnecessary files inside include directories
# This should go after the include directories
**/*~
**/*.log
**/.DS_Store
DOCKERIGNORE
display_alert "Creating" "Dockerfile" "info"
cat <<- INITIAL_DOCKERFILE > "${SRC}"/Dockerfile
FROM ${DOCKER_ARMBIAN_BASE_IMAGE}
# PLEASE DO NOT MODIFY THIS FILE. IT IS AUTOGENERATED AND WILL BE OVERWRITTEN. Please don't build this Dockerfile yourself either. Use Armbian helpers instead.
RUN echo "--> CACHE MISS IN DOCKERFILE: apt packages." && \
DEBIAN_FRONTEND=noninteractive apt-get -y update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends ${BASIC_DEPS[@]} ${host_dependencies[@]}
WORKDIR ${DOCKER_ARMBIAN_TARGET_PATH}
ENV ARMBIAN_RUNNING_IN_CONTAINER=yes
ADD . ${DOCKER_ARMBIAN_TARGET_PATH}/
RUN echo "--> CACHE MISS IN DOCKERFILE: running Armbian requirements initialization." && \
/bin/bash "${DOCKER_ARMBIAN_TARGET_PATH}/compile.sh" requirements SHOW_LOG=yes && \
rm -rf "${DOCKER_ARMBIAN_TARGET_PATH}/output" "${DOCKER_ARMBIAN_TARGET_PATH}/.tmp" "${DOCKER_ARMBIAN_TARGET_PATH}/cache"
INITIAL_DOCKERFILE
}
function docker_cli_build_dockerfile() {
local do_force_pull="no"
local local_image_sha
mkdir -p "${SRC}"/cache/docker
# Find files under "${SRC}"/cache/docker that are older than 1 day, and delete them.
# Find files under "${SRC}"/cache/docker that are older than 1 day; if any are found, force a fresh pull.
EXPIRED_MARKER="$(find "${SRC}"/cache/docker -type f -mtime +1 -exec echo -n {} \;)"
display_alert "Expired marker?" "${EXPIRED_MARKER}" "debug"
if [[ "x${EXPIRED_MARKER}x" != "xx" ]]; then
display_alert "More than" "1 day since last pull, pulling again" "info"
do_force_pull="yes"
fi
if [[ "${do_force_pull}" == "no" ]]; then
# Check if the base image is up to date.
local_image_sha="$(docker images --no-trunc --quiet "${DOCKER_ARMBIAN_BASE_IMAGE}")"
display_alert "Checking if base image exists at all" "local_image_sha: '${local_image_sha}'" "debug"
if [[ -n "${local_image_sha}" ]]; then
display_alert "Armbian docker image" "already exists: ${DOCKER_ARMBIAN_BASE_IMAGE}" "info"
else
display_alert "Armbian docker image" "does not exist: ${DOCKER_ARMBIAN_BASE_IMAGE}" "info"
do_force_pull="yes"
fi
fi
if [[ "${do_force_pull:-yes}" == "yes" ]]; then
display_alert "Pulling" "${DOCKER_ARMBIAN_BASE_IMAGE}" "info"
run_host_command_logged docker pull "${DOCKER_ARMBIAN_BASE_IMAGE}"
local_image_sha="$(docker images --no-trunc --quiet "${DOCKER_ARMBIAN_BASE_IMAGE}")"
display_alert "New local image sha after pull" "local_image_sha: ${local_image_sha}" "debug"
# print current date and time in epoch format; touches mtime of file
echo "${DOCKER_ARMBIAN_BASE_IMAGE}|${local_image_sha}|$(date +%s)" >> "${SRC}"/cache/docker/last-pull
fi
display_alert "Building" "Dockerfile via '${DOCKER_BUILDX_OR_BUILD[*]}'" "info"
BUILDKIT_COLORS="run=123,20,245:error=yellow:cancel=blue:warning=white" \
run_host_command_logged docker "${DOCKER_BUILDX_OR_BUILD[@]}" -t "${DOCKER_ARMBIAN_INITIAL_IMAGE_TAG}" -f "${SRC}"/Dockerfile "${SRC}"
}
function docker_cli_prepare_launch() {
# array for the generic armbian 'volumes' and their paths. Less specific first.
# @TODO: actually use this for smth.
declare -A -g DOCKER_ARMBIAN_VOLUMES=(
[".tmp"]="linux=anonymous darwin=anonymous" # tmpfs, discard, anonymous; whatever you wanna call it. It just needs to be 100% local to the container, and there's very little value in being able to look at it from the host.
["output"]="linux=bind darwin=bind" # catch-all output. specific subdirs are mounted below. it's a bind mount by default on both Linux and Darwin.
["output/images"]="linux=bind darwin=bind" # 99% of users want this as the result of their build, no matter if it's slow or not. bind on both.
["output/debs"]="linux=bind darwin=namedvolume" # generated output .deb files. not everyone is interested in this: most users just want images. Linux has fast binds, so bound by default. Darwin has slow binds, so it's a volume by default.
["output/logs"]="linux=bind darwin=bind" # log files produced. 100% of users want this. Bind on both Linux and Darwin. Is used to integrate launcher and actual-build logs, so must exist and work otherwise confusion ensues.
["cache"]="linux=bind darwin=namedvolume" # catch-all cache, could be bind-mounted or a volume. On Darwin it's too slow to bind-mount, so it's a volume by default. On Linux, it's a bind-mount by default.
["cache/gitballs"]="linux=bind darwin=namedvolume" # tarballs of git repos, can be bind-mounted or a volume. On Darwin it's too slow to bind-mount, so it's a volume by default. On Linux, it's a bind-mount by default.
["cache/toolchain"]="linux=bind darwin=namedvolume" # toolchain cache, can be bind-mounted or a volume. On Darwin it's too slow to bind-mount, so it's a volume by default. On Linux, it's a bind-mount by default.
["cache/aptcache"]="linux=bind darwin=namedvolume" # .deb apt cache, replaces apt-cacher-ng. Can be bind-mounted or a volume. On Darwin it's too slow to bind-mount, so it's a volume by default. On Linux, it's a bind-mount by default.
["cache/rootfs"]="linux=bind darwin=namedvolume" # rootfs .tar.zst cache, can be bind-mounted or a volume. On Darwin it's too slow to bind-mount, so it's a volume by default. On Linux, it's a bind-mount by default.
["cache/initrd"]="linux=bind darwin=namedvolume" # initrd.img cache, can be bind-mounted or a volume. On Darwin it's too slow to bind-mount, so it's a volume by default. On Linux, it's a bind-mount by default.
["cache/sources"]="linux=bind darwin=namedvolume" # operating directory. many things are cloned in here, and some are even built inside. needs to be local to the container, so it's a volume by default. On Linux, it's a bind-mount by default.
["cache/sources/linux-kernel"]="linux=bind darwin=namedvolume" # working tree for kernel builds. huge. contains both sources and the built object files. needs to be local to the container, so it's a volume by default. On Linux, it's a bind-mount by default.
)
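The @TODO above asks for this map to actually drive mount generation. A minimal sketch of how the per-OS spec strings could be resolved (the helper name and the `bind` fallback are assumptions, not existing build-system code):

```shell
# Hypothetical helper: pick the mount type for the current host OS out of a
# spec string like "linux=bind darwin=namedvolume". Falls back to "bind".
function docker_resolve_volume_type() {
	local spec="${1}" os pair
	os="$(echo "${2}" | tr '[:upper:]' '[:lower:]')" # eg "Linux" -> "linux"
	for pair in ${spec}; do
		if [[ "${pair%%=*}" == "${os}" ]]; then
			echo "${pair#*=}"
			return 0
		fi
	done
	echo "bind" # unknown OS: assume a plain bind mount
}
```

A loop over `DOCKER_ARMBIAN_VOLUMES` could then emit either a `type=bind` or `type=volume` `--mount` argument per key, instead of a hardcoded list.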
display_alert "Preparing" "common Docker arguments" "debug"
declare -g -a DOCKER_ARGS=(
"--rm" # side effect - named volumes are considered not attached to anything and are removed on "docker volume prune", since container was removed.
"--privileged" # Running this container in privileged mode is a simple way to solve loop device access issues, required for USB FEL or when writing image directly to the block device, when CARD_DEVICE is defined
"--cap-add=SYS_ADMIN" # add only required capabilities instead
"--cap-add=MKNOD" # (though MKNOD should be already present)
"--cap-add=SYS_PTRACE" # CAP_SYS_PTRACE is required for systemd-detect-virt in some cases @TODO: rpardini: so lets eliminate it
# "--mount" "type=bind,source=${SRC}/lib,target=${DOCKER_ARMBIAN_TARGET_PATH}/lib"
# type=volume, without source=, is an anonymous volume -- will be auto cleaned up together with the container;
# this could also be a type=tmpfs if you had enough ram - but armbian already does tmpfs for you if you
# have enough RAM (inside the container) so don't bother.
"--mount" "type=volume,destination=${DOCKER_ARMBIAN_TARGET_PATH}/.tmp"
# named volumes for different parts of the cache. so easy for user to drop any of them when needed
# @TODO: refactor this; this is only ideal for Darwin right now. Use DOCKER_ARMBIAN_VOLUMES to generate this.
"--mount" "type=volume,source=armbian-cache-parent,destination=${DOCKER_ARMBIAN_TARGET_PATH}/cache"
"--mount" "type=volume,source=armbian-cache-gitballs,destination=${DOCKER_ARMBIAN_TARGET_PATH}/cache/gitballs"
"--mount" "type=volume,source=armbian-cache-toolchain,destination=${DOCKER_ARMBIAN_TARGET_PATH}/cache/toolchain"
"--mount" "type=volume,source=armbian-cache-aptcache,destination=${DOCKER_ARMBIAN_TARGET_PATH}/cache/aptcache"
"--mount" "type=volume,source=armbian-cache-rootfs,destination=${DOCKER_ARMBIAN_TARGET_PATH}/cache/rootfs"
"--mount" "type=volume,source=armbian-cache-initrd,destination=${DOCKER_ARMBIAN_TARGET_PATH}/cache/initrd"
"--mount" "type=volume,source=armbian-cache-sources,destination=${DOCKER_ARMBIAN_TARGET_PATH}/cache/sources"
"--mount" "type=volume,source=armbian-cache-sources-linux-kernel,destination=${DOCKER_ARMBIAN_TARGET_PATH}/cache/sources/linux-kernel"
# Pass env var ARMBIAN_RUNNING_IN_CONTAINER to indicate we're running under Docker. This is also set in the Dockerfile; make sure.
"--env" "ARMBIAN_RUNNING_IN_CONTAINER=yes"
)
# @TODO: auto-compute this list; just get the dirs and filter some out
for MOUNT_DIR in "lib" "config" "extensions" "packages" "patch" "tools" "userpatches" "output"; do
mkdir -p "${SRC}/${MOUNT_DIR}"
DOCKER_ARGS+=("--mount" "type=bind,source=${SRC}/${MOUNT_DIR},target=${DOCKER_ARMBIAN_TARGET_PATH}/${MOUNT_DIR}")
done
# e.g. NOT on Darwin with Docker Desktop, which works simply with --privileged and the extra caps.
# the loop-device hacks below actually _break_ Darwin with Docker Desktop, so we need to detect the host OS.
if [[ "${DOCKER_ARMBIAN_HOST_OS_UNAME}" == "Linux" ]]; then
display_alert "Adding /dev/loop* hacks for" "${DOCKER_ARMBIAN_HOST_OS_UNAME}" "debug"
DOCKER_ARGS+=("--security-opt=apparmor:unconfined") # mounting things inside the container on Ubuntu won't work without this https://github.com/moby/moby/issues/16429#issuecomment-217126586
DOCKER_ARGS+=(--device-cgroup-rule='b 7:* rmw') # allow loop devices (not required)
DOCKER_ARGS+=(--device-cgroup-rule='b 259:* rmw') # allow loop device partitions
DOCKER_ARGS+=(-v /dev:/tmp/dev:ro) # this is an ugly hack (CONTAINER_COMPAT=y), but it is required to get /dev/loopXpY minor number for mknod inside the container, and container itself still uses private /dev internally
for loop_device_host in /dev/loop*; do # pass through loop devices from host to container; includes `loop-control`
DOCKER_ARGS+=("--device=${loop_device_host}")
done
else
display_alert "Skipping /dev/loop* hacks for" "${DOCKER_ARMBIAN_HOST_OS_UNAME}" "debug"
fi
}
function docker_cli_launch() {
display_alert "Showing Docker cmdline" "Docker args: '${DOCKER_ARGS[*]}'" "debug"
display_alert "Relaunching in Docker" "${*}" "debug"
display_alert "Relaunching in Docker" "here comes the 🐳" "info"
local -i docker_build_result=1
if docker run -it "${DOCKER_ARGS[@]}" "${DOCKER_ARMBIAN_INITIAL_IMAGE_TAG}" /bin/bash "${DOCKER_ARMBIAN_TARGET_PATH}/compile.sh" "$@"; then
display_alert "Docker Build finished" "successfully" "info"
docker_build_result=0
else
display_alert "Docker Build failed" "with errors" "err"
fi
# Find and show the path to the log file for the ARMBIAN_BUILD_UUID.
local logs_path="${DEST}/logs" log_file
log_file="$(find "${logs_path}" -type f -name "*${ARMBIAN_BUILD_UUID}*.*" -print -quit)"
display_alert "Build log done inside Docker" "${log_file}" "info"
# Show and help user understand space usage in Docker volumes.
# This is done in a loop; `docker system df` fails sometimes (for no good reason).
docker_cli_show_armbian_volumes_disk_usage
return ${docker_build_result}
}
function docker_cli_show_armbian_volumes_disk_usage() {
display_alert "Gathering docker volumes disk usage" "docker system df, wait..." "debug"
sleep_seconds="1" silent_retry="yes" do_with_retries 5 docker_cli_show_armbian_volumes_disk_usage_internal || {
display_alert "Could not get Docker volumes disk usage" "docker failed to report disk usage" "warn"
return 0 # not really a problem, just move on.
}
local docker_volume_usage
docker_volume_usage="$(docker system df -v | grep -e "^armbian-cache" | grep -v "\b0B" | tr -s " " | cut -d " " -f 1,3 | tr " " ":" | xargs echo || true)"
display_alert "Docker Armbian volume usage" "${docker_volume_usage}" "info"
}
function docker_cli_show_armbian_volumes_disk_usage_internal() {
# This fails sometimes, for no reason. Test it.
if docker system df -v &> /dev/null; then
return 0
else
return 1
fi
}
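`do_with_retries` (used above with `sleep_seconds` and `silent_retry` passed via the environment) is defined elsewhere in the library; its calling convention implies roughly this shape. A simplified sketch that honors `sleep_seconds` but ignores the `silent_retry` logging knob:

```shell
# Simplified sketch of the retry runner's contract: try the command up to
# <attempts> times, sleeping ${sleep_seconds} between failed attempts.
function do_with_retries() {
	declare -i attempts="${1}" counter=0
	shift
	while [[ ${counter} -lt ${attempts} ]]; do
		"$@" && return 0 # success: stop retrying
		counter+=1
		sleep "${sleep_seconds:-5}"
	done
	return 1 # all attempts failed
}
```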
# Leftovers from original Dockerfile before rewrite
## OLD DOCKERFILE ## RUN locale-gen en_US.UTF-8
## OLD DOCKERFILE ##
## OLD DOCKERFILE ## # Static port for NFSv3 server used for USB FEL boot
## OLD DOCKERFILE ## RUN sed -i 's/\(^STATDOPTS=\).*/\1"--port 32765 --outgoing-port 32766"/' /etc/default/nfs-common \
## OLD DOCKERFILE ## && sed -i 's/\(^RPCMOUNTDOPTS=\).*/\1"--port 32767"/' /etc/default/nfs-kernel-server
## OLD DOCKERFILE ##
## OLD DOCKERFILE ## ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en' LC_ALL='en_US.UTF-8' TERM=screen
## OLD DOCKERFILE ## WORKDIR /root/armbian
## OLD DOCKERFILE ## LABEL org.opencontainers.image.source="https://github.com/armbian/build/blob/master/config/templates/Dockerfile" \
## OLD DOCKERFILE ## org.opencontainers.image.url="https://github.com/armbian/build/pkgs/container/build" \
## OLD DOCKERFILE ## org.opencontainers.image.vendor="armbian" \
## OLD DOCKERFILE ## org.opencontainers.image.title="Armbian build framework" \
## OLD DOCKERFILE ## org.opencontainers.image.description="Custom Linux build framework" \
## OLD DOCKERFILE ## org.opencontainers.image.documentation="https://docs.armbian.com" \
## OLD DOCKERFILE ## org.opencontainers.image.authors="Igor Pecovnik" \
## OLD DOCKERFILE ## org.opencontainers.image.licenses="GPL-2.0"
## OLD DOCKERFILE ## ENTRYPOINT [ "/bin/bash", "/root/armbian/compile.sh" ]


@ -65,3 +65,148 @@ function install_host_side_packages() {
unset currently_installed_packages
return 0
}
function is_root_or_sudo_prefix() {
declare -n __my_sudo_prefix=${1} # nameref...
if [[ "${EUID}" == "0" ]]; then
# do not use sudo if we're effectively already root
display_alert "EUID=0, so" "we're already root!" "debug"
__my_sudo_prefix=""
elif [[ -n "$(command -v sudo)" ]]; then
# sudo binary found in path, use it.
display_alert "EUID is not 0" "sudo binary found, using it" "debug"
__my_sudo_prefix="sudo"
else
# No root and no sudo binary. Bail out
exit_with_error "EUID is not 0 and no sudo binary found - Please install sudo or run as root"
fi
return 0
}
# Usage: local_apt_deb_cache_prepare variable_for_use_yes_no variable_for_cache_dir "when you are using cache/before doing XX/after YY"
function local_apt_deb_cache_prepare() {
declare -n __my_use_yes_or_no=${1} # nameref...
declare -n __my_apt_cache_host_dir=${2} # nameref...
declare when_used="${3}"
__my_use_yes_or_no="no"
if [[ "${USE_LOCAL_APT_DEB_CACHE}" != "yes" ]]; then
# Not using the local cache, do nothing. Just return "no" in the first nameref.
return 0
fi
__my_use_yes_or_no="yes"
__my_apt_cache_host_dir="${SRC}/cache/aptcache/${RELEASE}-${ARCH}"
mkdir -p "${__my_apt_cache_host_dir}" "${__my_apt_cache_host_dir}/archives"
# get the size, in bytes, of the cache directory, including subdirs
declare -i cache_size # heh, mark var as integer
cache_size=$(du -sb "${__my_apt_cache_host_dir}" | cut -f1)
display_alert "Size of apt/deb cache ${when_used}" "${cache_size} bytes" "debug"
declare -g -i __previous_apt_cache_size
if [[ -z "${__previous_apt_cache_size}" ]]; then
# first time, set the size to 0
__previous_apt_cache_size=0
else
# not first time, check if the size has changed
if [[ "${cache_size}" -ne "${__previous_apt_cache_size}" ]]; then
display_alert "Local apt cache size changed ${when_used}" "from ${__previous_apt_cache_size} to ${cache_size} bytes" "debug"
else
display_alert "Local apt cache size unchanged ${when_used}" "at ${cache_size} bytes" "debug"
fi
fi
__previous_apt_cache_size=${cache_size}
return 0
}
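A caller consumes the two namerefs along these lines. This is an illustrative sketch only: `mount_apt_cache_into_chroot` is not a real helper; the actual wiring of the bind-mount described in the changelog lives in `chroot_sdcard_apt_get()`:

```shell
# Illustrative only: bind-mount the host-side apt cache into the chroot
# before running apt, using the two namerefs filled by the function above.
function mount_apt_cache_into_chroot() {
	declare use_local_cache="" host_cache_dir=""
	local_apt_deb_cache_prepare use_local_cache host_cache_dir "before apt-get install"
	if [[ "${use_local_cache}" == "yes" ]]; then
		mount --bind "${host_cache_dir}" "${SDCARD}/var/cache/apt"
	fi
}
```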
# usage: if armbian_is_host_running_systemd; then ... fi
function armbian_is_host_running_systemd() {
# Detect if systemctl is available in the path
if [[ -n "$(command -v systemctl)" ]]; then
display_alert "systemctl binary found" "host has systemd installed" "debug"
# Detect if systemd is actively running
if systemctl | grep -q 'running'; then
display_alert "systemctl reports" "systemd is running" "debug"
return 0
else
display_alert "systemctl binary found" "but systemd is not running" "debug"
return 1
fi
else
display_alert "systemctl binary not found" "host does not have systemd installed" "debug"
fi
# Not running with systemd, return 1.
display_alert "Systemd not detected" "host is not running systemd" "debug"
return 1
}
# usage: if armbian_is_running_in_container; then ... fi
function armbian_is_running_in_container() {
# First, check an environment variable. This is passed by the docker launchers, and also set in the Dockerfile, so should be authoritative.
if [[ "${ARMBIAN_RUNNING_IN_CONTAINER}" == "yes" ]]; then
display_alert "ARMBIAN_RUNNING_IN_CONTAINER is set to 'yes' in the environment" "so we're running in a container/Docker" "debug"
return 0
fi
# Second, check the hardcoded path `/.dockerenv` -- not all Docker images have this, but if they do, we're pretty sure it is under Docker.
if [[ -f "/.dockerenv" ]]; then
display_alert "File /.dockerenv exists" "so we're running in a container/Docker" "debug"
return 0
fi
# Third: if host is actively running systemd (not just installed), it's very _unlikely_ that we're running under Docker. bail.
if armbian_is_host_running_systemd; then
display_alert "Host is running systemd" "so we're not running in a container/Docker" "debug"
return 1
fi
# Fourth, if `systemd-detect-virt` is available in the path, and executing it returns "docker", we're pretty sure it is under Docker.
if [[ -n "$(command -v systemd-detect-virt)" ]]; then
local systemd_detect_virt_output
systemd_detect_virt_output="$(systemd-detect-virt)"
if [[ "${systemd_detect_virt_output}" == "docker" ]]; then
display_alert "systemd-detect-virt says we're running in a container/Docker" "so we're running in a container/Docker" "debug"
return 0
else
display_alert "systemd-detect-virt says we're running on '${systemd_detect_virt_output}'" "so we're not running in a container/Docker" "debug"
fi
fi
# End of the line. I've nothing else to check here. We're not running in a container/Docker.
display_alert "No evidence found that we're running in a container/Docker" "so we're not running in a container/Docker" "debug"
return 1
}
# This does `mkdir -p` on the parameters, and also sets it to be owned by the correct UID.
# Call: mkdir_recursive_and_set_uid_owner "dir1" "dir2" "dir3/dir4"
function mkdir_recursive_and_set_uid_owner() {
# loop over args...
local dir
for dir in "$@"; do
mkdir -p "${dir}"
reset_uid_owner "${dir}"
done
}
# Call: reset_uid_owner "one/file" "some/directory" "another/file"
function reset_uid_owner() {
if [[ "x${SET_OWNER_TO_UID}x" == "xx" ]]; then
return 0 # Nothing to do.
fi
# Loop over args..
local arg
for arg in "$@"; do
display_alert "reset_uid_owner: '${arg}' will be owner id '${SET_OWNER_TO_UID}'" "reset_uid_owner" "debug"
if [[ -d "${arg}" ]]; then
chown -R "${SET_OWNER_TO_UID}" "${arg}"
elif [[ -f "${arg}" ]]; then
chown "${SET_OWNER_TO_UID}" "${arg}"
else
display_alert "reset_uid_owner: '${arg}' is not a file or directory" "skipping" "debug"
continue # skip this argument, but keep processing the rest
fi
done
}


@ -18,10 +18,15 @@ prepare_host() {
# wait until package manager finishes possible system maintenance
wait_for_package_manager
# fix for Locales settings
# fix for Locales settings, if locale-gen is installed, and /etc/locale.gen exists.
if [[ -n "$(command -v locale-gen)" && -f /etc/locale.gen ]]; then
if ! grep -q "^en_US.UTF-8 UTF-8" /etc/locale.gen; then
local sudo_prefix="" && is_root_or_sudo_prefix sudo_prefix # nameref; "sudo_prefix" will be 'sudo' or ''
${sudo_prefix} sed -i 's/# en_US.UTF-8/en_US.UTF-8/' /etc/locale.gen
${sudo_prefix} locale-gen
fi
else
display_alert "locale-gen is not installed @host" "skipping locale-gen -- problems might arise" "warn"
fi
export LC_ALL="en_US.UTF-8"
@ -80,42 +85,30 @@ prepare_host() {
fi
fi
declare -g USE_LOCAL_APT_DEB_CACHE=${USE_LOCAL_APT_DEB_CACHE:-yes} # Use SRC/cache/aptcache as local apt cache by default
display_alert "Using local apt cache?" "USE_LOCAL_APT_DEB_CACHE: ${USE_LOCAL_APT_DEB_CACHE}" "debug"
if armbian_is_running_in_container; then
display_alert "Running in container" "Adding provisions for container building" "info"
declare -g CONTAINER_COMPAT=yes # this controls mknod usage for loop devices.
# disable apt-cacher unless NO_APT_CACHER=no is specified explicitly
if [[ $NO_APT_CACHER != no ]]; then
display_alert "apt-cacher is disabled in containers, set NO_APT_CACHER=no to override" "" "wrn"
NO_APT_CACHER=yes
fi
# trying to use nested containers is not a good idea, so don't permit EXTERNAL_NEW=compile
if [[ $EXTERNAL_NEW == compile ]]; then
display_alert "EXTERNAL_NEW=compile is not available when running in container, setting to prebuilt" "" "wrn"
EXTERNAL_NEW=prebuilt
fi
SYNC_CLOCK=no
else
display_alert "NOT running in container" "No special provisions for container building" "debug"
fi
# Skip verification if you are working offline
if ! $offline; then
install_host_dependencies "dependencies during prepare_release"
# Manage apt-cacher-ng
acng_configure_and_restart_acng
@ -128,7 +121,10 @@ prepare_host() {
# create directory structure # @TODO: this should be close to DEST, otherwise super-confusing
mkdir -p "${SRC}"/{cache,output} "${USERPATCHES_PATH}"
# @TODO: rpardini: wtf?
if [[ -n $SUDO_USER ]]; then
display_alert "ARMBIAN-NEXT UNHANDLED! SUDO_USER variable" "ARMBIAN-NEXT UNHANDLED! SUDO_USER: $SUDO_USER" "wrn"
chgrp --quiet sudo cache output "${USERPATCHES_PATH}"
# SGID bit on cache/sources breaks kernel dpkg packaging
chmod --quiet g+w,g+s output "${USERPATCHES_PATH}"
@ -136,6 +132,7 @@ prepare_host() {
find "${SRC}"/output "${USERPATCHES_PATH}" -type d ! -group sudo -exec chgrp --quiet sudo {} \;
find "${SRC}"/output "${USERPATCHES_PATH}" -type d ! -perm -g+w,g+s -exec chmod --quiet g+w,g+s {} \;
fi
# @TODO: original: mkdir -p "${DEST}"/debs-beta/extra "${DEST}"/debs/extra "${DEST}"/{config,debug,patch} "${USERPATCHES_PATH}"/overlay "${SRC}"/cache/{sources,hash,hash-beta,toolchain,utility,rootfs} "${SRC}"/.tmp
mkdir -p "${USERPATCHES_PATH}"/overlay "${SRC}"/cache/{sources,hash,hash-beta,toolchain,utility,rootfs} "${SRC}"/.tmp
@ -146,7 +143,7 @@ prepare_host() {
# enable arm binary format so that the cross-architecture chroot environment will work
if build_task_is_enabled "bootstrap"; then
modprobe -q binfmt_misc || display_alert "Failed to modprobe" "binfmt_misc" "warn" # @TODO avoid this if possible, is it already loaded, or built-in? then ignore
mountpoint -q /proc/sys/fs/binfmt_misc/ || mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
if [[ "$(arch)" != "aarch64" ]]; then
test -e /proc/sys/fs/binfmt_misc/qemu-arm || update-binfmts --enable qemu-arm
@ -166,6 +163,10 @@ prepare_host() {
find "${SRC}"/patch -maxdepth 2 -type d ! -name . | sed "s%/.*patch%/$USERPATCHES_PATH%" | xargs mkdir -p
fi
# Reset owner of userpatches if so required
reset_uid_owner "${USERPATCHES_PATH}" # Fix owner of files in the final destination
# @TODO: check every possible mount point. Not only one. People might have different mounts / Docker volumes...
# check free space (basic) @TODO probably useful to refactor and implement in multiple spots.
declare -i free_space_bytes
free_space_bytes=$(findmnt --noheadings --output AVAIL --bytes --target "${SRC}" --uniq 2> /dev/null) # in bytes
@ -192,12 +193,15 @@ function early_prepare_host_dependencies() {
libusb-1.0-0-dev linux-base locales ncurses-base ncurses-term
ntpdate patchutils
pkg-config pv python3-dev python3-distutils qemu-user-static rsync swig
u-boot-tools udev uuid-dev whiptail
zlib1g-dev busybox fdisk
# python2, including headers, mostly used by some u-boot builds (2017 et al, odroidxu4 and others).
python2 python2-dev
# systemd-container brings in systemd-nspawn, which is used by the buildpkg functionality
# systemd-container # @TODO: bring this back eventually. I don't think trying to use those inside a container is a good idea.
# non-mess below?
file ccze colorized-logs tree expect # logging utilities; expect is needed for 'unbuffer' command
unzip zip p7zip-full pigz pixz pbzip2 lzop zstd # compressors et al
@ -220,6 +224,11 @@ function early_prepare_host_dependencies() {
host_dependencies+=("apt-cacher-ng")
fi
if [[ "${REQUIREMENTS_DEFS_ONLY}" == "yes" ]]; then
display_alert "Not calling add_host_dependencies nor host_dependencies_known" "due to REQUIREMENTS_DEFS_ONLY" "debug"
return 0
fi
export EXTRA_BUILD_DEPS=""
call_extension_method "add_host_dependencies" <<- 'ADD_HOST_DEPENDENCIES'
*run before installing host dependencies*
@ -238,5 +247,33 @@ function early_prepare_host_dependencies() {
All the dependencies, including the default/core deps and the ones added via `${EXTRA_BUILD_DEPS}`
are determined at this point, but not yet installed.
HOST_DEPENDENCIES_KNOWN
}
function install_host_dependencies() {
display_alert "Installing build dependencies"
display_alert "Installing build dependencies" "$*" "debug"
# don't prompt for apt cacher selection. this is to skip the prompt only, since we'll manage acng config later.
local sudo_prefix="" && is_root_or_sudo_prefix sudo_prefix # nameref; "sudo_prefix" will be 'sudo' or ''
echo "apt-cacher-ng apt-cacher-ng/tunnelenable boolean false" | ${sudo_prefix} debconf-set-selections # no sudo needed for the echo itself
# This handles the wanted list in $host_dependencies, updates apt only if needed
# $host_dependencies is produced by early_prepare_host_dependencies()
install_host_side_packages "${host_dependencies[@]}"
run_host_command_logged update-ccache-symlinks
export FINAL_HOST_DEPS="${host_dependencies[*]}"
if [[ "${REQUIREMENTS_DEFS_ONLY}" == "yes" ]]; then
display_alert "Not calling host_dependencies_ready" "due to REQUIREMENTS_DEFS_ONLY" "debug"
return 0
fi
call_extension_method "host_dependencies_ready" <<- 'HOST_DEPENDENCIES_READY'
*run after all host dependencies are installed*
At this point we can read `${FINAL_HOST_DEPS}`, but changing won't have any effect.
All the dependencies, including the default/core deps and the ones added via `${EXTRA_BUILD_DEPS}`
are installed at this point. The system clock has not yet been synced.
HOST_DEPENDENCIES_READY
}


@ -0,0 +1,30 @@
# @TODO: called by no-one, yet, or ever. This should not be done here.
function vagrant_install_vagrant() {
# Check for Vagrant
# @TODO yeah, checks for ${1} in a function. not cool. not the place to install stuff either.
if [[ "${1}" == vagrant && -z "$(command -v vagrant)" ]]; then
display_alert "Vagrant not installed." "Installing"
sudo apt-get update
sudo apt-get install -y vagrant virtualbox
fi
}
function vagrant_prepare_userpatches() {
# Create example configs if none found in userpatches
if [[ ! -f "${SRC}"/userpatches/config-vagrant.conf ]]; then
display_alert "Create example Vagrant config using template" "config-vagrant.conf" "info"
# Create Vagrant config
if [[ ! -f "${SRC}"/userpatches/config-vagrant.conf ]]; then
cp "${SRC}"/config/templates/config-vagrant.conf "${SRC}"/userpatches/config-vagrant.conf || exit 1
fi
fi
if [[ ! -f "${SRC}"/userpatches/Vagrantfile ]]; then
# Create Vagrant file
if [[ ! -f "${SRC}"/userpatches/Vagrantfile ]]; then
cp "${SRC}"/config/templates/Vagrantfile "${SRC}"/userpatches/Vagrantfile || exit 1
fi
fi
}


@ -9,18 +9,37 @@ function check_loop_device() {
}
function check_loop_device_internal() {
local device="${1}"
display_alert "Checking loop device" "${device}" "debug"
if [[ ! -b "${device}" ]]; then
if [[ $CONTAINER_COMPAT == yes && -b "/tmp/${device}" ]]; then
display_alert "Creating device node" "${device}"
run_host_command_logged mknod -m0660 "${device}" b "0x$(stat -c '%t' "/tmp/${device}")" "0x$(stat -c '%T' "/tmp/${device}")"
if [[ ! -b "${device}" ]]; then # try again after creating node
return 1 # fail, it will be retried, and should exist on next retry.
else
display_alert "Device node created OK" "${device}" "info"
fi
else
display_alert "Device node does not exist yet" "${device}" "debug"
return 1
fi
fi
if [[ "${CHECK_LOOP_FOR_SIZE:-yes}" != "no" ]]; then
# Device exists. Make sure it's not 0-sized. Read with blockdev --getsize64 /dev/sda
local device_size
device_size=$(blockdev --getsize64 "${device}")
display_alert "Device node size" "${device}: ${device_size}" "debug"
if [[ ${device_size} -eq 0 ]]; then
run_host_command_logged ls -la "${device}"
run_host_command_logged lsblk
run_host_command_logged blkid
display_alert "Device node exists but is 0-sized" "${device}" "debug"
return 1
fi
fi
return 0
}


@ -205,7 +205,7 @@ function prepare_partitions() {
LOOP=$(losetup -f) || exit_with_error "Unable to find free loop device"
display_alert "Allocated loop device" "LOOP=${LOOP}"
CHECK_LOOP_FOR_SIZE="no" check_loop_device "$LOOP" # initially loop is zero sized, ignore it.
run_host_command_logged losetup $LOOP ${SDCARD}.raw # @TODO: had a '-P' here, what was it?
@ -215,6 +215,9 @@ function prepare_partitions() {
display_alert "Running partprobe" "${LOOP}" "debug"
run_host_command_logged partprobe $LOOP
display_alert "Checking again after partprobe" "${LOOP}" "debug"
check_loop_device "$LOOP" # check again, now it has to have a size! otherwise wait.
# stage: create fs, mount partitions, create fstab
rm -f $SDCARD/etc/fstab
if [[ -n $rootpart ]]; then


@ -118,10 +118,11 @@ create_image_from_sdcard_rootfs() {
It is the last possible chance to modify `$CARD_DEVICE`.
POST_BUILD_IMAGE
display_alert "Moving artefacts from temporary directory to its final destination" "${version}" "info"
[[ -n $compression_type ]] && run_host_command_logged rm -v "${DESTIMG}/${version}.img"
run_host_command_logged rsync -av --no-owner --no-group --remove-source-files "${DESTIMG}/${version}"* "${FINALDEST}"
run_host_command_logged rm -rfv --one-file-system "${DESTIMG}"
reset_uid_owner "${FINALDEST}" # Fix owner of files in the final destination
# write image to SD card
write_image_to_device "${FINALDEST}/${version}.img" "${CARD_DEVICE}"


@ -37,8 +37,10 @@ function write_image_to_device() {
display_alert "Writing failed" "${image_file}" "err"
fi
fi
elif armbian_is_running_in_container; then
if [[ -n ${device} ]]; then
# display warning when we want to write sd card under Docker
display_alert "Can't write to ${device}" "Under Docker" "wrn"
fi
fi
}


@ -10,6 +10,13 @@ function logging_init() {
if [[ "${CI}" == "true" ]]; then # ... but that is too dark for Github Actions
export tool_color="${normal_color}"
fi
if [[ "${ARMBIAN_RUNNING_IN_CONTAINER}" == "yes" ]]; then # if in container, add a cyan "whale emoji" to the left marker wrapped in dark gray brackets
local container_emoji="🐳" # 🐳 or 🐋
export left_marker="${gray_color}[${container_emoji}|${normal_color}"
elif [[ "$(uname -s)" == "Darwin" ]]; then # if on Mac, add an apple emoji to the left marker wrapped in dark gray brackets
local mac_emoji="🍏" # 🍏 or 🍎
export left_marker="${gray_color}[${mac_emoji}|${normal_color}"
fi
}
function logging_error_show_log() {
@@ -295,7 +302,7 @@ function export_ansi_logs() {
----------------------------------------------------------------------------------------------------------------
ANSI_HEADER
if [[ -n "$(command -v git)" && -d "${SRC}/.git" ]]; then
display_alert "Gathering git info for logs" "Processing git information, please wait..." "debug"
cat <<- GIT_ANSI_HEADER > "${target_file}"
----------------------------------------------------------------------------------------------------------------
@@ -388,51 +395,76 @@ function export_html_logs() {
display_alert "Built HTML log file" "${target_file}"
}
function discard_logs_tmp_dir() {
# Linux allows us to be more careful, but really, those are log files we're talking about.
if [[ "$(uname)" == "Linux" ]]; then
rm -rf --one-file-system "${LOGDIR}"
else
rm -rf "${LOGDIR}"
fi
}
# Cleanup for logging.
function trap_handler_cleanup_logging() {
[[ "x${LOGDIR}x" == "xx" ]] && return 0
[[ "x${LOGDIR}x" == "x/x" ]] && return 0
[[ ! -d "${LOGDIR}" ]] && return 0
display_alert "Cleaning up log files" "LOGDIR: '${LOGDIR}'" "debug"
# `pwd` might not even be valid anymore. Move back to ${SRC}
cd "${SRC}" || exit_with_error "cray-cray about SRC: ${SRC}"
# Just delete LOGDIR if in CONFIG_DEFS_ONLY mode.
if [[ "${CONFIG_DEFS_ONLY}" == "yes" ]]; then
display_alert "Discarding logs" "CONFIG_DEFS_ONLY=${CONFIG_DEFS_ONLY}" "debug"
discard_logs_tmp_dir
return 0
fi
local target_path="${DEST}/logs"
mkdir_recursive_and_set_uid_owner "${target_path}"
# Before writing new logfile, compress and move existing ones to archive folder.
# - Unless running under CI.
# - Also not if signalled via SKIP_LOG_ARCHIVE=yes
if [[ "${CI:-false}" != "true" && "${SKIP_LOG_ARCHIVE:-no}" != "yes" ]]; then
declare -a existing_log_files_array
mapfile -t existing_log_files_array < <(find "${target_path}" -maxdepth 1 -type f -name "armbian-*.*")
declare one_old_logfile old_logfile_fn target_archive_path="${target_path}"/archive
for one_old_logfile in "${existing_log_files_array[@]}"; do
old_logfile_fn="$(basename "${one_old_logfile}")"
if [[ "${old_logfile_fn}" == *${ARMBIAN_BUILD_UUID}* ]]; then
display_alert "Skipping archiving of current logfile" "${old_logfile_fn}" "warn"
continue
fi
display_alert "Archiving old logfile" "${old_logfile_fn}" "warn"
mkdir_recursive_and_set_uid_owner "${target_archive_path}"
# Check if we have `zstdmt` at this stage; if not, use standard gzip
if [[ -n "$(command -v zstdmt)" ]]; then
zstdmt --quiet "${one_old_logfile}" -o "${target_archive_path}/${old_logfile_fn}.zst"
reset_uid_owner "${target_archive_path}/${old_logfile_fn}.zst"
else
# shellcheck disable=SC2002 # my cat is not useless. a bit whiny. not useless.
cat "${one_old_logfile}" | gzip > "${target_archive_path}/${old_logfile_fn}.gz"
reset_uid_owner "${target_archive_path}/${old_logfile_fn}.gz"
fi
rm -f "${one_old_logfile}"
done
else
display_alert "Not archiving old logs." "CI=${CI:-false}, SKIP_LOG_ARCHIVE=${SKIP_LOG_ARCHIVE:-no}" "debug"
fi
if [[ "${EXPORT_HTML_LOG}" == "yes" ]]; then
local target_file="${target_path}/armbian-${ARMBIAN_LOG_CLI_ID}-${ARMBIAN_BUILD_UUID}.html"
export_html_logs
reset_uid_owner "${target_file}"
fi
local target_file="${target_path}/armbian-${ARMBIAN_LOG_CLI_ID}-${ARMBIAN_BUILD_UUID}.ansitxt.log"
export_ansi_logs
reset_uid_owner "${target_file}"
discard_logs_tmp_dir
}

View File

@@ -9,16 +9,26 @@ function chroot_sdcard_apt_get_install_download_only() {
chroot_sdcard_apt_get --no-install-recommends --download-only install "$@"
}
function chroot_sdcard_apt_get_remove() {
DONT_MAINTAIN_APT_CACHE="yes" chroot_sdcard_apt_get remove "$@"
}
function chroot_sdcard_apt_get() {
acng_check_status_or_restart # make sure apt-cacher-ng is running OK.
local -a apt_params=("-y")
[[ $NO_APT_CACHER != yes ]] && apt_params+=(
-o "Acquire::http::Proxy=\"http://${APT_PROXY_ADDR:-localhost:3142}\""
-o "Acquire::http::Proxy::localhost=\"DIRECT\""
)
apt_params+=(-o "Dpkg::Use-Pty=0") # Please be quiet
if [[ "${DONT_MAINTAIN_APT_CACHE:-no}" == "yes" ]]; then
# Configure Clean-Installed to off
display_alert "Configuring APT to not clean up the cache" "APT will not clean up the cache" "debug"
apt_params+=(-o "APT::Clean-Installed=0")
fi
# Allow for clean-environment apt-get
local -a prelude_clean_env=()
if [[ "${use_clean_environment:-no}" == "yes" ]]; then
@@ -26,7 +36,26 @@ function chroot_sdcard_apt_get() {
prelude_clean_env=("env" "-i")
fi
local use_local_apt_cache apt_cache_host_dir
local_apt_deb_cache_prepare use_local_apt_cache apt_cache_host_dir "before 'apt-get $*'" # 2 namerefs + "when"
if [[ "${use_local_apt_cache}" == "yes" ]]; then
# prepare and mount apt cache dir at /var/cache/apt/archives in the SDCARD.
local apt_cache_sdcard_dir="${SDCARD}/var/cache/apt"
run_host_command_logged mkdir -pv "${apt_cache_sdcard_dir}"
display_alert "Mounting local apt cache dir" "${apt_cache_sdcard_dir}" "debug"
run_host_command_logged mount --bind "${apt_cache_host_dir}" "${apt_cache_sdcard_dir}"
fi
local chroot_apt_result=1
chroot_sdcard "${prelude_clean_env[@]}" DEBIAN_FRONTEND=noninteractive apt-get "${apt_params[@]}" "$@" && chroot_apt_result=0
local_apt_deb_cache_prepare use_local_apt_cache apt_cache_host_dir "after 'apt-get $*'" # 2 namerefs + "when"
if [[ "${use_local_apt_cache}" == "yes" ]]; then
display_alert "Unmounting apt cache dir" "${apt_cache_sdcard_dir}" "debug"
run_host_command_logged umount "${apt_cache_sdcard_dir}"
fi
return $chroot_apt_result
}
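The call sites above rely on `local_apt_deb_cache_prepare`, which is not shown in this diff: it takes two namerefs plus a "when" label, and doubles as the utility that logs what is happening to the cache during usage. A hypothetical sketch of that contract; names, paths and the logging format are assumptions based on the commit message:

```shell
# Hypothetical sketch of local_apt_deb_cache_prepare(); the real implementation
# lives elsewhere in the tree.
function local_apt_deb_cache_prepare() {
	declare -n use_cache="${1}" # nameref: receives yes/no
	declare -n cache_dir="${2}" # nameref: receives the host-side cache path
	declare when="${3}"         # free-form label, e.g. "before 'apt-get install foo'"
	use_cache="${USE_LOCAL_APT_DEB_CACHE:-yes}" # default =yes
	cache_dir="${SRC}/cache/aptcache/${RELEASE}-${ARCH}" # per-release, per-arch cache
	if [[ "${use_cache}" == "yes" ]]; then
		mkdir -p "${cache_dir}/archives" # subdir shared by apt and debootstrap
		# help understand what is happening to the cache during usage
		echo "apt cache '${cache_dir}' is $(du -sh "${cache_dir}" | cut -f1) ${when}"
	fi
}
```

Called twice around each `apt-get` run (and around debootstrap), the before/after size log makes cache growth and apt's own maintenance of it visible.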
# please, please, unify around this function.
@@ -101,7 +130,7 @@ function host_apt_get_install() {
# For running apt-get stuff host-side. Not chroot!
function host_apt_get() {
local -a apt_params=("-y")
apt_params+=(-o "Dpkg::Use-Pty=0") # Please be quiet
run_host_command_logged DEBIAN_FRONTEND=noninteractive apt-get "${apt_params[@]}" "$@"
}
@@ -217,12 +246,20 @@ run_on_sdcard() {
function do_with_retries() {
local retries="${1}"
shift
local sleep_seconds="${sleep_seconds:-5}"
local silent_retry="${silent_retry:-no}"
local counter=0
while [[ $counter -lt $retries ]]; do
counter=$((counter + 1))
"$@" && return 0 # execute and return 0 if success; if not, let it loop;
if [[ "${silent_retry}" == "yes" ]]; then
: # do nothing
else
display_alert "Command failed, retrying in ${sleep_seconds}s" "$*" "warn"
fi
sleep ${sleep_seconds}
done
display_alert "Command failed ${counter} times, giving up" "$*" "warn"
return 1

View File

@@ -142,9 +142,11 @@ function exit_with_error() {
# @TODO: integrate both overlayfs and the FD locking with cleanup logic
display_alert "Build terminating... wait for cleanups..." "" "err"
overlayfs_wrapper "cleanup"
## This does not really make sense. wtf?
## unlock loop device access in case of starvation # @TODO: hmm, say that again?
#exec {FD}> /var/lock/armbian-debootstrap-losetup
#flock -u "${FD}"
exit 43
}

View File

@@ -163,6 +163,10 @@ function main_default_build_single() {
LOG_SECTION="chroot_build_packages" do_with_logging chroot_build_packages
fi
# Reset owner of DEB_STORAGE, if needed. Might be a lot of packages there, but such is life.
# @TODO: might be needed also during 'cleanup': if some package fails, the previous package might be left owned by root.
reset_uid_owner "${DEB_STORAGE}"
# end of kernel-only, so display what was built.
if [[ $KERNEL_ONLY != yes ]]; then
display_alert "Kernel build done" "@host" "target-reached"
@@ -187,7 +191,9 @@ function main_default_build_single() {
runtime=$(((end - start) / 60))
display_alert "Runtime" "$runtime min" "info"
if armbian_is_running_in_container; then
BUILD_CONFIG='docker' # @TODO: this is not true, depends on how we end up launching this.
fi
# Make it easy to repeat build by displaying build options used. Prepare array.
local -a repeat_args=("./compile.sh" "${BUILD_CONFIG}" " BRANCH=${BRANCH}")

View File

@@ -6,33 +6,41 @@ apt_purge_unneeded_packages() {
chroot_sdcard_apt_get autoremove
}
# this is called:
# 1) install_deb_chroot "${DEB_STORAGE}/somethingsomething.deb" (yes, it's always ${DEB_STORAGE})
# 2) install_deb_chroot "linux-u-boot-${BOARD}-${BRANCH}" "remote" (normal invocation, install from repo)
# 3) install_deb_chroot "linux-u-boot-${BOARD}-${BRANCH}" "remote" "yes" (install from repo, then also copy the WHOLE CACHE back to DEB_STORAGE)
install_deb_chroot() {
local package="$1"
local variant="$2"
local transfer="$3"
local install_target="${package}"
local log_extra=" from repository"
local package_filename
package_filename="$(basename "${package}")"
# For the local case.
if [[ "${variant}" != "remote" ]]; then
log_extra=""
# @TODO: this can be sped up significantly by mounting debs readonly directly in chroot /root/debs and installing from there
# also won't require cleanup later
install_target="/root/${package_filename}"
[[ ! -f "${SDCARD}${install_target}" ]] && run_host_command_logged cp -pv "${package}" "${SDCARD}${install_target}"
fi
display_alert "Installing${log_extra}" "${package_filename}" "debinstall" # This needs its own level
# install in chroot via apt-get, not dpkg, so dependencies are also installed from repo if needed.
export if_error_detail_message="Installation of $install_target failed ${BOARD} ${RELEASE} ${BUILD_DESKTOP} ${LINUXFAMILY}"
DONT_MAINTAIN_APT_CACHE="yes" chroot_sdcard_apt_get --no-install-recommends install "${install_target}" # don't auto-maintain apt cache when installing from packages.
# this is a contrived way to get the u-boot .deb when installing from the repo; the image builder needs the .deb on the host to deploy u-boot later, even though it is already installed inside the chroot
if [[ ${variant} == remote && ${transfer} == yes ]]; then
display_alert "install_deb_chroot called with" "transfer=yes, copy WHOLE CACHE back to DEB_STORAGE, this is probably a bug" "warn"
run_host_command_logged rsync -r "${SDCARD}"/var/cache/apt/archives/*.deb "${DEB_STORAGE}"/
fi
# IMPORTANT! Do not use short-circuit above as last statement in a function, since it determines the result of the function.
return 0

View File

@@ -105,20 +105,33 @@ function create_new_rootfs_cache() {
display_alert "Installing base system" "Stage 1/2" "info"
cd "${SDCARD}" || exit_with_error "cray-cray about SDCARD" "${SDCARD}" # this will prevent error sh: 0: getcwd() failed
local -a deboostrap_arguments=(
"--variant=minbase" # minimal base variant. go ask Debian about it.
"--include=${DEBOOTSTRAP_LIST// /,}" # from aggregation?
${PACKAGE_LIST_EXCLUDE:+ --exclude="${PACKAGE_LIST_EXCLUDE// /,}"} # exclude some
"--arch=${ARCH}" # the arch
"--components=${DEBOOTSTRAP_COMPONENTS}" # from aggregation?
)
# Small detour for local apt caching option.
local use_local_apt_cache apt_cache_host_dir
local_apt_deb_cache_prepare use_local_apt_cache apt_cache_host_dir "before debootstrap" # 2 namerefs + "when"
if [[ "${use_local_apt_cache}" == "yes" ]]; then
# Small difference for debootstrap, if compared to apt: we need to pass it the "/archives" subpath to share cache with apt.
deboostrap_arguments+=("--cache-dir=${apt_cache_host_dir}/archives") # cache .deb's used
fi
# These always come last: positional arguments.
deboostrap_arguments+=("--foreign" "${RELEASE}" "${SDCARD}/" "${debootstrap_apt_mirror}") # path and mirror
run_host_command_logged debootstrap "${deboostrap_arguments[@]}" || {
exit_with_error "Debootstrap first stage failed" "${BRANCH} ${BOARD} ${RELEASE} ${DESKTOP_APPGROUPS_SELECTED} ${DESKTOP_ENVIRONMENT} ${BUILD_MINIMAL}"
}
[[ ! -f ${SDCARD}/debootstrap/debootstrap ]] && exit_with_error "Debootstrap first stage did not produce marker file"
local_apt_deb_cache_prepare use_local_apt_cache apt_cache_host_dir "after debootstrap" # 2 namerefs + "when"
deploy_qemu_binary_to_chroot "${SDCARD}" # this is cleaned-up later by post_debootstrap_tweaks()
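Note the `/archives` subtlety from the comment above: apt gets the whole cache dir bind-mounted at `/var/cache/apt`, so its debs land in `archives/` underneath the mount point, while debootstrap has no mount involved and is pointed at the `archives/` subdirectory directly. A sketch with illustrative values (the real paths come from `local_apt_deb_cache_prepare`):

```shell
# illustrative host-side cache dir, ${RELEASE}-${ARCH} layout assumed
apt_cache_host_dir="cache/aptcache/jammy-arm64"
# apt: whole dir bind-mounted into the chroot; debs end up in
# .../var/cache/apt/archives inside it, maintained by apt itself
apt_bind_mount_target="var/cache/apt"
# debootstrap: takes the archives subdir directly, sharing the same .deb files
debootstrap_cache_arg="--cache-dir=${apt_cache_host_dir}/archives"
echo "${debootstrap_cache_arg}"
```

Keeping both pointed at the same `archives/` directory means a .deb downloaded by debootstrap is reused by apt, and vice versa.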
mkdir -p "${SDCARD}/usr/share/keyrings/"
@@ -142,12 +155,14 @@ function create_new_rootfs_cache() {
chmod 755 "$SDCARD/sbin/initctl"
chmod 755 "$SDCARD/sbin/start-stop-daemon"
# stage: configure language and locales.
# this _requires_ DEST_LANG; otherwise, bomb: if it's not set, _all_ locales will be generated, which is very slow.
display_alert "Configuring locales" "DEST_LANG: ${DEST_LANG}" "info"
[[ "x${DEST_LANG}x" == "xx" ]] && exit_with_error "Bug: got to config locales without DEST_LANG set"
[[ -f $SDCARD/etc/locale.gen ]] && sed -i "s/^# ${DEST_LANG}/${DEST_LANG}/" $SDCARD/etc/locale.gen
chroot_sdcard LC_ALL=C LANG=C locale-gen "${DEST_LANG}"
chroot_sdcard LC_ALL=C LANG=C update-locale "LANG=${DEST_LANG}" "LANGUAGE=${DEST_LANG}" "LC_MESSAGES=${DEST_LANG}"
if [[ -f $SDCARD/etc/default/console-setup ]]; then
# @TODO: Should be configurable.
@@ -170,7 +185,7 @@ function create_new_rootfs_cache() {
# Add external / PPAs to apt sources; decides internally based on minimal/cli/desktop dir/file structure
add_apt_sources
# @TODO: use asset logging for this; actually log contents of the files too
run_host_command_logged ls -l "${SDCARD}/usr/share/keyrings"
run_host_command_logged ls -l "${SDCARD}/etc/apt/sources.list.d"
run_host_command_logged cat "${SDCARD}/etc/apt/sources.list"
@@ -228,9 +243,15 @@ function create_new_rootfs_cache() {
PURGINGPACKAGES=$(chroot $SDCARD /bin/bash -c "dpkg -l | grep \"^rc\" | awk '{print \$2}' | tr \"\n\" \" \"")
chroot_sdcard_apt_get remove --purge $PURGINGPACKAGES
# stage: remove packages that are installed, but not required anymore after other packages were installed/removed.
# don't touch the local cache.
DONT_MAINTAIN_APT_CACHE="yes" chroot_sdcard_apt_get autoremove
# Only clean if not using local cache. Otherwise it would be cleaning the cache, not the chroot.
if [[ "${USE_LOCAL_APT_DEB_CACHE}" != "yes" ]]; then
display_alert "Late Cleaning" "late: package lists and apt cache" "warn"
chroot_sdcard_apt_get clean
fi
# DEBUG: print free space
local freespace=$(LC_ALL=C df -h)

View File

@@ -248,11 +248,14 @@ function install_distribution_agnostic() {
display_alert "Temporarily disabling" "initramfs-tools hook for kernel"
chroot_sdcard chmod -v -x /etc/kernel/postinst.d/initramfs-tools
# Only clean if not using local cache. Otherwise it would be cleaning the cache, not the chroot.
if [[ "${USE_LOCAL_APT_DEB_CACHE}" != "yes" ]]; then
display_alert "Cleaning" "package lists and apt cache" "warn"
chroot_sdcard_apt_get clean
fi
display_alert "Updating" "apt package lists"
do_with_retries 3 chroot_sdcard_apt_get update
# install family packages
if [[ -n ${PACKAGE_LIST_FAMILY} ]]; then
@@ -280,7 +283,7 @@ function install_distribution_agnostic() {
if [[ -n ${PACKAGE_LIST_FAMILY_REMOVE} ]]; then
_pkg_list=${PACKAGE_LIST_FAMILY_REMOVE}
display_alert "Removing PACKAGE_LIST_FAMILY_REMOVE packages" "${_pkg_list}"
chroot_sdcard_apt_get_remove --auto-remove ${_pkg_list}
fi
# remove board packages. loop over the list to remove, check if they're actually installed, then remove individually.
@@ -293,7 +296,7 @@ function install_distribution_agnostic() {
# shellcheck disable=SC2076 # I wanna match literally, thanks.
if [[ " ${currently_installed_packages[*]} " =~ " ${PKG_REMOVE} " ]]; then
display_alert "Removing PACKAGE_LIST_BOARD_REMOVE package" "${PKG_REMOVE}"
chroot_sdcard_apt_get_remove --auto-remove "${PKG_REMOVE}"
fi
done
unset currently_installed_packages
@@ -306,7 +309,7 @@ function install_distribution_agnostic() {
UBOOT_VER=$(dpkg --info "${DEB_STORAGE}/${CHOSEN_UBOOT}_${REVISION}_${ARCH}.deb" | grep Descr | awk '{print $(NF)}')
install_deb_chroot "${DEB_STORAGE}/${CHOSEN_UBOOT}_${REVISION}_${ARCH}.deb"
else
install_deb_chroot "linux-u-boot-${BOARD}-${BRANCH}" "remote" "yes" # @TODO: rpardini: this is completely different! "remote" "yes"
UBOOT_REPO_VERSION=$(dpkg-deb -f "${SDCARD}"/var/cache/apt/archives/linux-u-boot-${BOARD}-${BRANCH}*_${ARCH}.deb Version)
fi
}
@@ -336,16 +339,17 @@ function install_distribution_agnostic() {
install_deb_chroot "${DEB_STORAGE}/${CHOSEN_KERNEL/image/headers}_${REVISION}_${ARCH}.deb"
fi
else
install_deb_chroot "linux-image-${BRANCH}-${LINUXFAMILY}" "remote" # @TODO: rpardini: again a different one, without "yes" this time
VER=$(dpkg-deb -f "${SDCARD}"/var/cache/apt/archives/linux-image-${BRANCH}-${LINUXFAMILY}*_${ARCH}.deb Source)
VER="${VER/-$LINUXFAMILY/}"
VER="${VER/linux-/}"
display_alert "Parsed kernel version from remote package" "${VER}" "debug"
if [[ "${ARCH}" != "amd64" && "${LINUXFAMILY}" != "media" ]]; then # amd64 does not have dtb package, see packages/armbian/builddeb:355
install_deb_chroot "linux-dtb-${BRANCH}-${LINUXFAMILY}" "remote" # @TODO: rpardini: again a different one, without "yes" this time
fi
[[ $INSTALL_HEADERS == yes ]] && install_deb_chroot "linux-headers-${BRANCH}-${LINUXFAMILY}" "remote" # @TODO: rpardini: again a different one, without "yes" this time
fi
# Eh, short circuit above. Beware.
}
call_extension_method "post_install_kernel_debs" <<- 'POST_INSTALL_KERNEL_DEBS'
@@ -358,7 +362,7 @@ function install_distribution_agnostic() {
if [[ "${REPOSITORY_INSTALL}" != *bsp* ]]; then
install_deb_chroot "${DEB_STORAGE}/${BSP_CLI_PACKAGE_FULLNAME}.deb"
else
install_deb_chroot "${CHOSEN_ROOTFS}" "remote" # @TODO: rpardini: err.... what?
fi
# install armbian-desktop

View File

@@ -33,9 +33,72 @@ source "${SRC}"/lib/functions/bsp/utils-bsp.sh
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/cli/cli-entrypoint.sh
# shellcheck source=lib/functions/cli/cli-entrypoint.sh
source "${SRC}"/lib/functions/cli/cli-entrypoint.sh
### lib/functions/cli/cli-build.sh
# shellcheck source=lib/functions/cli/cli-build.sh
source "${SRC}"/lib/functions/cli/cli-build.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/cli/cli-configdump.sh
# shellcheck source=lib/functions/cli/cli-configdump.sh
source "${SRC}"/lib/functions/cli/cli-configdump.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/cli/cli-docker.sh
# shellcheck source=lib/functions/cli/cli-docker.sh
source "${SRC}"/lib/functions/cli/cli-docker.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/cli/cli-requirements.sh
# shellcheck source=lib/functions/cli/cli-requirements.sh
source "${SRC}"/lib/functions/cli/cli-requirements.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/cli/cli-undecided.sh
# shellcheck source=lib/functions/cli/cli-undecided.sh
source "${SRC}"/lib/functions/cli/cli-undecided.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/cli/cli-vagrant.sh
# shellcheck source=lib/functions/cli/cli-vagrant.sh
source "${SRC}"/lib/functions/cli/cli-vagrant.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/cli/commands.sh
# shellcheck source=lib/functions/cli/commands.sh
source "${SRC}"/lib/functions/cli/commands.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/cli/entrypoint.sh
# shellcheck source=lib/functions/cli/entrypoint.sh
source "${SRC}"/lib/functions/cli/entrypoint.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
@@ -271,6 +334,15 @@ set -o errexit ## set -e : exit the script if any statement returns a non-true
# shellcheck source=lib/functions/host/basic-deps.sh
source "${SRC}"/lib/functions/host/basic-deps.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/host/docker.sh
# shellcheck source=lib/functions/host/docker.sh
source "${SRC}"/lib/functions/host/docker.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
@@ -298,6 +370,15 @@ set -o errexit ## set -e : exit the script if any statement returns a non-true
# shellcheck source=lib/functions/host/prepare-host.sh
source "${SRC}"/lib/functions/host/prepare-host.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/host/vagrant.sh
# shellcheck source=lib/functions/host/vagrant.sh
source "${SRC}"/lib/functions/host/vagrant.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
@@ -523,6 +604,7 @@ set -o errexit ## set -e : exit the script if any statement returns a non-true
# shellcheck source=lib/functions/rootfs/rootfs-desktop.sh
source "${SRC}"/lib/functions/rootfs/rootfs-desktop.sh
# no errors tolerated. one last time for the win!
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled