Create foundation for Labgrid-based OS tests (#2812)

* Create foundation for Labgrid-based OS tests

Add a foundation for Labgrid-based tests of OS builds. Currently it uses just
the QEMU driver, which starts a virtual machine with a pristine OS and
generates a few log reports that are saved as build artifacts.

The workflow is currently triggered either manually by specifying an OS
version, or by the OS build job, which now uploads the OVA image as an
artifact. This allows for some modularity. If we eventually add the
possibility to run builds on PRs, we could also add the workflow_call
trigger and turn the workflow into a reusable one.
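
For illustration only (a sketch, not part of this commit), the reusable
variant could accept the version as a workflow_call input alongside the
existing manual trigger, roughly:

    on:
      workflow_dispatch:
        inputs:
          version:
            description: Version of HAOS to test
            required: true
            type: string
      workflow_call:
        inputs:
          version:
            required: true
            type: string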

TBD (in future PRs): some meaningful tests and the possibility to test on
real hardware (either local or distributed).
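
As a rough, hypothetical sketch of that hardware direction (resource and
driver names below are assumptions, not part of this change), a Labgrid
environment for a board exported by a labgrid exporter could replace the
QEMU driver with a remote place and a serial console:

    targets:
      main:
        resources:
          - RemotePlace:
              name: haos-test-board   # hypothetical place name
        drivers:
          - SerialDriver: {}
          - ShellDriver:
              login_prompt: 'homeassistant login: '
              username: 'root'
              prompt: '# '
              login_timeout: 300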

* Apply suggestions from @agners

Co-authored-by: Stefan Agner <stefan@agner.ch>

* Wrap test command in a script, create venv for local tests

* Make shellcheck happy

---------

Co-authored-by: Stefan Agner <stefan@agner.ch>
Jan Čermák 2023-10-17 18:23:29 +02:00 committed by GitHub
parent 56ccbf4b9e
commit 3e36628c09
10 changed files with 283 additions and 1 deletions

.github/workflows/build.yaml

@@ -232,6 +232,14 @@ jobs:
          path: /mnt/cache/cc
          key: haos-cc-${{ matrix.board.id }}-${{ github.run_id }}
      - name: Upload ova image to artifacts for test job
        uses: actions/upload-artifact@v3
        if: ${{ matrix.board.id == 'ova' }}
        with:
          name: ova-image
          path: |
            output/images/haos_ova*.qcow2.xz

  bump_version:
    name: Bump dev channel version
    if: ${{ github.repository == 'home-assistant/operating-system' }}

.github/workflows/test.yaml (new file)

@@ -0,0 +1,92 @@
name: Test HAOS image
run-name: "Test HAOS ${{ inputs.version || format('(OS build #{0})', github.event.workflow_run.run_number) }}"

on:
  workflow_dispatch:
    inputs:
      version:
        description: Version of HAOS to test
        required: true
        type: string
  workflow_run:
    workflows: ["OS build"]  # must be in sync with build workflow `name`
    types:
      - completed

jobs:
  test:
    if: ${{ github.event_name != 'workflow_run' || github.event.workflow_run.conclusion == 'success' }}
    env:
      NO_KVM: 1
    name: Test in QEMU
    runs-on: ubuntu-22.04
    defaults:
      run:
        working-directory: ./tests
    steps:
      - name: Install system dependencies
        run: |
          sudo apt update
          sudo apt install -y qemu-system-x86 ovmf
      - name: Checkout source
        uses: actions/checkout@v4
        with:
          persist-credentials: false
      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: 3.12
      - name: Install Python requirements
        run:
          pip install -r requirements.txt
      - name: Download HAOS image
        if: ${{ github.event_name == 'workflow_dispatch' }}
        run: |
          curl -sfL -o haos.qcow2.xz https://os-artifacts.home-assistant.io/${{github.event.inputs.version}}/haos_ova-${{github.event.inputs.version}}.qcow2.xz
      - name: Get OS image artifact
        if: ${{ github.event_name == 'workflow_run' }}
        uses: dawidd6/action-download-artifact@v2
        with:
          workflow: build.yaml
          workflow_conclusion: success
          name: ova-image
      - name: Extract OS image
        run: |
          unxz haos.qcow2.xz
      - name: Run tests
        run: |
          ./run_tests.sh
      - name: Archive logs
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: logs
          path: |
            lg_logs/*
      - name: Archive JUnit reports
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: junit_reports
          path: |
            junit_reports/*.xml
      - name: Publish test report
        uses: mikepenz/action-junit-report@v4
        if: always()
        with:
          report_paths: 'junit_reports/*.xml'
          annotate_only: true
          detailed_summary: true

.gitignore

@@ -9,4 +9,4 @@ output*/
 *.pem
 # vscode generated files
-.vscode*
+.vscode*

tests/.gitignore (new file)

@@ -0,0 +1,14 @@
# QEMU images
*.qcow2

# Generated logs
/junit_reports
/lg_logs

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# Virtualenv
venv

tests/conftest.py (new file)

@@ -0,0 +1,21 @@
import os

import pytest


@pytest.hookimpl
def pytest_runtest_setup(item):
    log_dir = item.config.option.lg_log
    if not log_dir:
        return

    logging_plugin = item.config.pluginmanager.get_plugin("logging-plugin")
    logging_plugin.set_log_path(os.path.join(log_dir, f"{item.name}.log"))


@pytest.fixture
def shell_command(target, strategy):
    strategy.transition("shell")
    shell = target.get_driver("ShellDriver")
    return shell

tests/qemu-strategy.yaml (new file)

@@ -0,0 +1,31 @@
targets:
  main:
    resources: []
    drivers:
      - QEMUDriver:
          qemu_bin: qemu-x86_64
          machine: pc
          cpu: qemu64
          memory: 1G
          extra_args: "-snapshot -accel kvm"
          nic: user,model=virtio-net-pci
          disk: disk-image
          bios: bios
      - ShellDriver:
          login_prompt: 'homeassistant login: '
          username: 'root'
          prompt: '# '
          login_timeout: 300
      - QEMUShellStrategy: {}

tools:
  qemu-x86_64: /usr/bin/qemu-system-x86_64

images:
  disk-image: ./haos.qcow2
  bios: /usr/share/ovmf/OVMF.fd

imports:
  - qemu_shell_strategy.py

tests/qemu_shell_strategy.py (new file)

@@ -0,0 +1,53 @@
import enum
import os

import attr
from labgrid import target_factory, step
from labgrid.strategy import Strategy, StrategyError


class Status(enum.Enum):
    unknown = 0
    off = 1
    shell = 2


@target_factory.reg_driver
@attr.s(eq=False)
class QEMUShellStrategy(Strategy):
    """Strategy for starting a QEMU VM and running shell commands within it."""

    bindings = {
        "qemu": "QEMUDriver",
        "shell": "ShellDriver",
    }

    status = attr.ib(default=Status.unknown)

    def __attrs_post_init__(self):
        super().__attrs_post_init__()
        if "-accel kvm" in self.qemu.extra_args and os.environ.get("NO_KVM"):
            self.qemu.extra_args = self.qemu.extra_args.replace(
                "-accel kvm", ""
            ).strip()

    @step(args=["status"])
    def transition(self, status, *, step):  # pylint: disable=redefined-outer-name
        if not isinstance(status, Status):
            status = Status[status]
        if status == Status.unknown:
            raise StrategyError(f"can not transition to {status}")
        elif status == self.status:
            step.skip("nothing to do")
            return  # nothing to do
        elif status == Status.off:
            self.target.activate(self.qemu)
            self.qemu.off()
        elif status == Status.shell:
            self.target.activate(self.qemu)
            self.qemu.on()
            self.target.activate(self.shell)
        else:
            raise StrategyError(f"no transition found from {self.status} to {status}")

        self.status = status

tests/requirements.txt (new file)

@@ -0,0 +1 @@
labgrid==23.0.3

tests/run_tests.sh (new executable file)

@@ -0,0 +1,14 @@
#!/bin/bash
set -e

if [ -z "$GITHUB_ACTIONS" ] && [ -z "$VIRTUAL_ENV" ]; then
    # Environment should be set up in separate GHA steps - which can also
    # handle caching of the dependencies, etc.
    python3 -m venv venv
    # shellcheck disable=SC1091
    source venv/bin/activate
    pip3 install -r requirements.txt
fi

pytest --lg-env qemu-strategy.yaml --lg-log=lg_logs --junitxml=junit_reports/smoke_test.xml smoke_test

tests/smoke_test/ (new test module)

@@ -0,0 +1,48 @@
import logging
from time import sleep

_LOGGER = logging.getLogger(__name__)


def test_init(shell_command):
    def check_container_running(container_name):
        out = shell_command.run_check(
            f"docker container inspect -f '{{{{.State.Status}}}}' {container_name} || true"
        )
        return "running" in out

    # wait for important containers first
    for _ in range(20):
        if check_container_running("homeassistant") and check_container_running("hassio_supervisor"):
            break
        sleep(5)

    # wait for system ready
    for _ in range(20):
        output = "\n".join(shell_command.run_check("ha os info || true"))
        if "System is not ready" not in output:
            break
        sleep(5)

    output = shell_command.run_check("ha os info")
    _LOGGER.info("%s", "\n".join(output))


def test_dmesg(shell_command):
    output = shell_command.run_check("dmesg")
    _LOGGER.info("%s", "\n".join(output))


def test_supervisor_logs(shell_command):
    output = shell_command.run_check("ha su logs")
    _LOGGER.info("%s", "\n".join(output))


def test_systemctl_status(shell_command):
    output = shell_command.run_check(
        "systemctl --no-pager -l status -a || true", timeout=90
    )
    _LOGGER.info("%s", "\n".join(output))