# Architecture

## Status

This repository currently contains product and implementation planning documents, not an implemented application. The architecture described here is therefore the intended architecture inferred from:

- `SPEC.md`
- `TASKS.md`
- `README.md`
- The screen mockups in `SPEC/`

Where the sources are incomplete, this document calls that out explicitly instead of inventing behavior.
## Purpose

The system is a terminal user interface (TUI) application that guides a user through authenticating against a Proxmox VE server and creating a virtual machine through a multi-step wizard.

Primary responsibilities:

- Authenticate a user against Proxmox
- Load server-side reference data needed for VM setup
- Collect VM configuration across several wizard steps
- Validate and summarize the configuration
- Translate the collected state into Proxmox API requests
- Execute VM creation and report progress, success, and failure
## High-Level Shape

The intended design is a layered Textual application with a strict separation between UI, workflow/domain state, and Proxmox integration.

```text
Textual App Shell
  -> Wizard Screens
    -> Reusable Widgets
      -> Domain / Workflow State
        -> Service Layer
          -> Proxmox API
```

This structure is directly supported by `TASKS.md`, which requires separation of:

- app shell
- screens
- widgets
- models
- services
- a central state or domain module for the VM configuration workflow
## System Context

### External system

The only explicit external dependency is the Proxmox VE API.

Expected external interactions:

- authentication realm discovery
- login / authentication
- loading nodes
- loading next free VM ID
- loading resource pools
- loading existing tags
- loading storage backends
- loading available ISO images
- creating the VM
- updating VM configuration after creation

### User

The user operates the application interactively through a terminal UI. The wizard is expected to be keyboard-friendly and stateful across steps.
## Main Runtime Flow

The workflow described in `SPEC.md` and `TASKS.md` is:

1. Login
2. General VM configuration
3. OS selection
4. System configuration
5. Disk configuration
6. CPU configuration
7. Memory configuration
8. Network configuration
9. Confirmation
10. VM creation submission
11. Post-creation serial-console configuration
Each step is expected to support explicit UI states where relevant:

- default
- loading
- success
- empty
- error

That state coverage is called out repeatedly in `README.md` and `TASKS.md`, so it is a core architectural requirement, not a UI detail.
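Because every step must be able to render these states explicitly, modeling them as a shared enum keeps the vocabulary consistent across screens. A minimal sketch (the name `StepState` is an assumption, not taken from the spec):

```python
from enum import Enum, auto

class StepState(Enum):
    """Explicit UI states every wizard step may render (name is illustrative)."""
    DEFAULT = auto()
    LOADING = auto()
    SUCCESS = auto()
    EMPTY = auto()
    ERROR = auto()
```

A screen can then switch its rendering on a single `StepState` value instead of juggling separate boolean flags.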
## Architectural Layers

### 1. App Shell

The app shell owns application startup and top-level navigation.

Expected responsibilities:

- start the Textual application
- manage high-level routing between login, wizard, and submission/result states
- provide shared app context to screens
- coordinate back/next/confirm navigation

The current run command placeholder in `README.md` and `TASKS.md` is `uv run python -m your_app`, so the real package/module name is still unresolved.
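The back/next ordering the shell coordinates can be kept framework-agnostic so it is testable without Textual. A sketch under that assumption — the `Step` and `WizardRouter` names are hypothetical; the real Textual `App` would delegate to something like this:

```python
from enum import Enum

class Step(Enum):
    """Wizard step order as listed in the spec."""
    LOGIN = 0
    GENERAL = 1
    OS = 2
    SYSTEM = 3
    DISKS = 4
    CPU = 5
    MEMORY = 6
    NETWORK = 7
    CONFIRM = 8

class WizardRouter:
    """Framework-agnostic step navigation; clamps at the first and last step."""

    def __init__(self) -> None:
        self.current = Step.LOGIN

    def next(self) -> Step:
        members = list(Step)
        idx = min(members.index(self.current) + 1, len(members) - 1)
        self.current = members[idx]
        return self.current

    def back(self) -> Step:
        members = list(Step)
        idx = max(members.index(self.current) - 1, 0)
        self.current = members[idx]
        return self.current
```

Keeping the ordering here means screens never need to know which step follows them.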
### 2. Screens

Each wizard step should be implemented as a dedicated screen. Based on the spec, those screens are:

- Login screen
- General screen
- OS screen
- System screen
- Disks screen
- CPU screen
- Memory screen
- Network screen
- Confirm screen

Responsibilities of screens:

- render step-specific controls and feedback
- bind widgets to workflow/domain state
- trigger service-backed loading actions through a non-UI layer
- show validation, loading, empty, and error states
- Non-responsibilities:

- direct Proxmox API calls
- business rule ownership
- payload assembly for VM creation

That separation is explicitly required by the repository guidance.
### 3. Reusable Widgets

Widgets should contain presentation logic and local interaction behavior only.

Likely widget candidates inferred from the screens:

- step navigation/footer
- form rows and field groups
- async loading/error message blocks
- tag editor
- disk list editor
- summary panels

The repo guidance says to keep business logic out of widgets where possible, so widgets should consume already-shaped state instead of deriving backend rules themselves.
### 4. Domain Model / Workflow State

The domain layer is the center of the application. `TASKS.md` explicitly asks for a central state or domain module for the VM configuration workflow.

This layer should model:

- authentication state
- selected realm and authenticated session context
- VM configuration collected across steps
- per-step validation results
- derived defaults
- submission status

Core sub-models implied by the spec:

- `AuthenticationConfig`
- `GeneralConfig`
- `OsConfig`
- `SystemConfig`
- `DiskConfig` plus a disk collection
- `CpuConfig`
- `MemoryConfig`
- `NetworkConfig`
- `VmConfig` as the aggregate root
This layer should also own workflow-specific rules such as:

- default VM ID is the next free ID above 100
- default OS type/version
- default machine, BIOS, SCSI controller, CPU, memory, bridge, and other screen defaults
- derived memory defaults
- incremental disk device naming such as `scsi0`, `scsi1`, ...
- selecting the latest matching NixOS minimal ISO when available
### 5. Service Layer

The service layer isolates Proxmox integration and gives the UI a testable interface. This is explicitly required in Task 1 and repeated in later tasks.

Expected service responsibilities:

- define an interface or protocol used by the UI/domain layers
- encapsulate Proxmox HTTP/API interaction
- map Proxmox responses into application-friendly data structures
- expose task-oriented methods rather than raw API calls where possible
- surface structured errors

Likely service capabilities:

- `load_realms()`
- `login()`
- `load_nodes()`
- `load_next_vm_id()`
- `load_pools()`
- `load_tags()`
- `load_storages()`
- `load_isos(storage)`
- `create_vm(config)`
- `configure_vm_serial_console(node, vmid)`

The service layer should also contain or delegate to a request/payload builder that converts `VmConfig` into the final Proxmox API request shape. It also needs orchestration logic for the post-create step so partial-success cases are represented explicitly.
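The capabilities listed above could be expressed as a `typing.Protocol`, so the UI and domain layers depend on an interface rather than a concrete client. A sketch — the method signatures are assumptions derived from the capability list, not a confirmed API:

```python
from typing import Any, Protocol, runtime_checkable

@runtime_checkable
class ProxmoxService(Protocol):
    """Interface the UI/domain layers depend on; signatures are illustrative."""

    def load_realms(self) -> list[str]: ...
    def login(self, username: str, password: str, realm: str) -> None: ...
    def load_nodes(self) -> list[str]: ...
    def load_next_vm_id(self) -> int: ...
    def load_pools(self) -> list[str]: ...
    def load_tags(self) -> list[str]: ...
    def load_storages(self) -> list[str]: ...
    def load_isos(self, storage: str) -> list[str]: ...
    def create_vm(self, config: Any) -> None: ...
    def configure_vm_serial_console(self, node: str, vmid: int) -> None: ...
```

Because it is a structural protocol, test fakes satisfy it without inheriting from anything.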
### 6. Proxmox API Adapter

Below the service layer, the application will need a concrete Proxmox adapter/client.

Concerns at this level:

- authentication/session handling
- request execution
- response parsing
- API-specific error translation

This layer should remain narrow and infrastructure-focused. It should not know about Textual or screen behavior.
## Data Flow

### Reference data loading

Several screens depend on live server data.

Examples:

- login screen loads authentication realms
- general screen loads nodes, next VM ID, pools, and tags
- OS screen loads storages and ISO images
- creation step submits the final VM request

Expected data flow:

```text
Screen action
  -> domain/controller update
  -> service call
  -> Proxmox API
  -> mapped result or structured error
  -> state update
  -> UI rerender
```
### Submission flow

The final submission path should be:

```text
Collected per-step config
  -> aggregate VmConfig
  -> validation
  -> Proxmox request payload builder
  -> create VM API call
  -> update VM config with serial0/vga
  -> success/failure state shown in UI
```

Because the serial console is configured after creation, the application should treat submission as a short request sequence instead of a single atomic API call.
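That request sequence implies three distinguishable outcomes, including partial success where the VM exists but the console step failed. A sketch of the orchestration — `submit`, `SubmitOutcome`, and `SubmitResult` are hypothetical names:

```python
from dataclasses import dataclass
from enum import Enum, auto

class SubmitOutcome(Enum):
    CREATED = auto()                  # VM created and serial console configured
    CREATED_CONSOLE_FAILED = auto()   # VM exists but the post-create step failed
    FAILED = auto()                   # creation itself failed

@dataclass
class SubmitResult:
    outcome: SubmitOutcome
    detail: str = ""

def submit(service, config, node: str, vmid: int) -> SubmitResult:
    """Run the create-then-configure sequence, surfacing partial success explicitly."""
    try:
        service.create_vm(config)
    except Exception as exc:
        return SubmitResult(SubmitOutcome.FAILED, str(exc))
    try:
        service.configure_vm_serial_console(node, vmid)
    except Exception as exc:
        return SubmitResult(SubmitOutcome.CREATED_CONSOLE_FAILED, str(exc))
    return SubmitResult(SubmitOutcome.CREATED)
```

Representing `CREATED_CONSOLE_FAILED` as its own outcome lets the UI tell the user the VM exists rather than reporting a blanket failure.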
## State Management

The documents strongly suggest a single workflow state instead of screen-local business state.

Why this matters:

- the confirmation screen needs the full configuration
- back/next navigation should preserve user input
- defaults and validation span multiple steps
- submission requires one aggregate payload

Recommended state boundaries inferred from the requirements:

- screen-local transient UI state: focus, open dialog, temporary edit row
- workflow state: all persisted user choices and loaded reference data
- service state: request progress, responses, and errors
## Validation Strategy

Validation should live in the domain/workflow layer, not in widgets.

Validation categories implied by the spec:

- required fields such as credentials, VM name, node, and other mandatory Proxmox inputs
- numeric constraints for VM ID, disk size, CPU, memory, VLAN, MTU, rate limit, and multiqueue
- conditional rules, for example ISO selection only when media type is ISO
- cross-field rules, such as minimum memory defaults and disk device uniqueness

The confirmation screen is explicitly responsible for showing validation issues or missing required inputs before submission.
## Error Handling Model

Error handling is a first-class architectural concern in the available documents.

The system should distinguish at least:

- authentication failures
- reference data loading failures
- empty-result states that are valid, such as no pools or no ISOs
- validation failures before submission
- VM creation API failures
- post-creation serial-console configuration failures after the VM already exists

Errors should be represented in a structured way so screens can render meaningful messages without parsing raw exceptions.
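One structured representation is a single app-level exception type tagged with an error kind. A sketch — `ErrorKind`, `AppError`, and the `retryable` flag are assumptions about how the real design might classify failures:

```python
from dataclasses import dataclass
from enum import Enum, auto

class ErrorKind(Enum):
    AUTH = auto()
    LOAD = auto()
    VALIDATION = auto()
    CREATE = auto()
    POST_CREATE = auto()

@dataclass
class AppError(Exception):
    """Structured error the service layer raises and screens render directly."""
    kind: ErrorKind
    message: str
    retryable: bool = False
```

Screens can then branch on `kind` (and offer a retry action when `retryable` is set) without inspecting exception strings.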
## Testing Architecture

The repository guidance defines the testing strategy clearly.

### Unit tests

Target:

- domain models
- default and derived-value logic
- validation
- payload building
- service behavior with fakes/mocks

### Textual interaction tests

Target:

- screen flows using `run_test()` and `Pilot`
- navigation
- user input handling
- async loading and error transitions
- submission success/failure behavior

### Snapshot tests

Target:

- default states
- loading states
- empty states
- error states
- key visual summary/submission states

This testing strategy reinforces the separation between UI and business logic: business rules should be testable without rendering the Textual UI. The create-then-configure request sequence is especially important to cover in service tests.
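That sequence can be covered with a hand-rolled fake that records call order, so the test asserts both the outcome and that creation happened before console configuration. A sketch under assumed names (`FakeProxmox`, `run_submission` stand in for the real service and orchestration):

```python
class FakeProxmox:
    """Records call order; optionally fails the post-create console step."""

    def __init__(self, fail_console: bool = False) -> None:
        self.calls: list[str] = []
        self.fail_console = fail_console

    def create_vm(self, config) -> None:
        self.calls.append("create_vm")

    def configure_vm_serial_console(self, node, vmid) -> None:
        self.calls.append("configure_serial")
        if self.fail_console:
            raise RuntimeError("serial config failed")

def run_submission(service, config, node, vmid) -> str:
    """Tiny stand-in for the real submission orchestration."""
    service.create_vm(config)
    try:
        service.configure_vm_serial_console(node, vmid)
    except RuntimeError:
        return "created_console_failed"
    return "created"
```

Asserting on `calls` pins the request ordering; asserting on the return value pins the partial-success behavior.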
## Suggested Repository Structure

`TASKS.md` does not prescribe exact paths, but it does require a separation of concerns. A structure consistent with the current requirements would be:

```text
your_app/
    __main__.py
    app.py
    screens/
    widgets/
    models/
    services/
    domain/
    testing/
tests/
    unit/
    integration/
    snapshots/
```

This is illustrative, not authoritative. The final module name and exact layout remain open.
## Important Defaults and Rules From the Spec

These defaults are architecturally significant: they belong in domain/service logic rather than in ad hoc widget code.

- Authentication realm defaults to PAM
- VM ID defaults to next free ID above 100
- General screen defaults: HA enabled, start at boot disabled
- OS screen defaults: storage `cephfs`, latest matching NixOS minimal ISO when available, guest type Linux, guest version `6.x - 2.6 Kernel`
- System screen defaults: machine `q35`, BIOS `OVMF (UEFI)`, EFI disk enabled, EFI storage `ceph-pool`, SCSI controller `VirtIO SCSI single`, Qemu Agent enabled
- Disk defaults: SCSI bus, incrementing device numbers, storage `ceph-pool`, size 32 GiB, format RAW, IO thread enabled, SSD emulation enabled, backup enabled, async IO `io_uring`
- CPU defaults: 2 cores, 1 socket, type `host`
- Memory defaults: 2048 MiB, min memory equals memory, ballooning enabled, KSM enabled
- Network defaults: bridge `vmbr9`, model `virtio`, firewall enabled
## Open Questions

The available resources leave several architectural details unresolved:

- What concrete Python package/module name should replace `your_app`?
- Which Proxmox authentication mechanism should be used under the hood: ticket/cookie, API token, or both?
- How should session persistence work across screens and retries?
- Does the app target a single Proxmox node/cluster endpoint or support multiple saved endpoints?
- How should physical disc drive selection work in a terminal UI, given it is listed as a valid OS media option but not described further?
- What exact validation rules are required for optional numeric fields such as startup delay, shutdown timeout, MTU, rate limit, and multiqueue?
- What is the expected behavior when API-provided defaults such as `cephfs`, `ceph-pool`, or `vmbr9` do not exist?
- Does the app need to document guest-side operating system changes required for a working serial login, or is the scope limited to Proxmox-side serial-console configuration only?
## Architectural Summary

The intended architecture is a Textual wizard application with:

- a thin app shell for navigation
- per-step screens for presentation
- reusable widgets with minimal business logic
- a central workflow/domain state model
- a service layer that isolates Proxmox integration
- strong test coverage across unit, interaction, and snapshot levels

This design matches the current repository guidance and is the clearest path to implementing the spec without coupling Textual UI code directly to Proxmox API behavior.