Setting Up Labgrid
The goal is simple: connect your boards to one Labgrid exporter host, control power with a TinyControl tcPDU, access the serial console over UART, and bootstrap images through OpenOCD.
I use a real setup with three boards called board1, board2, and board3. The examples below can be copied directly and then adjusted for your own board names, IP addresses, and OpenOCD config files.
1. Understand the Setup
Labgrid is easier to understand when you split it into four parts:
- coordinator: keeps the global state of the lab
- exporter: runs on the machine physically connected to hardware
- place: a logical test slot such as board1
- client: the command line or test job that acquires a place and uses it
In this setup:
- the exporter host has the USB UART adapters and USB JTAG debuggers connected
- the DUT power inputs are connected to tcPDU outlets
- the coordinator tracks which resources belong to which place
- the client acquires a place before using console, power, or bootstrap operations
2. Make the Hardware Connections
For each board, connect the hardware in the same pattern:
- Connect the board power input to one tcPDU outlet.
- Connect the board UART to the exporter host through a USB UART adapter.
- Connect the board JTAG or debug connector to the exporter host through a USB debugger.
- If you use extra control lines such as GPIO-based enables or resets, connect those to the exporter host as well.
For the real example in this article:
- board1 uses tcPDU outlet 1
- board2 uses tcPDU outlet 2
- board3 uses tcPDU outlet 7
- all boards are exported from the same Labgrid host in Munich
The most important rule is this: use stable identifiers for USB devices. Do not depend on /dev/ttyUSB0 or on USB enumeration order.
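Labgrid matches udev properties directly, so persistent symlinks are not required, but a udev rule can still give you a stable device name for manual debugging. A sketch, reusing the ID_SERIAL of board1's adapter from the exporter configuration below (the rule file name is my own choice):

```
# /etc/udev/rules.d/99-lab-serial.rules (hypothetical file name)
# Optional: creates /dev/board1-uart regardless of enumeration order.
SUBSYSTEM=="tty", ENV{ID_SERIAL}=="Silicon_Labs_CP2102N_USB_to_UART_Bridge_Controller_1e574c842d61ed1193564f8009472825", SYMLINK+="board1-uart"
```

Reload with `sudo udevadm control --reload` and re-plug the adapter to apply it.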
3. Prepare the tcPDU
TinyControl tcPDU devices usually start with a default static IP address of 192.168.1.130.
For initial access from a Linux host, temporarily add an address in the same subnet:
sudo ip addr add 192.168.1.10/24 dev eth0
ping 192.168.1.130
Then open the web interface and move the tcPDU to your real lab subnet.
After that, test API access:
curl -u admin:admin \
"http://192.168.1.130/api/v1/read/status/?outValues&varValues&powerValues&boardValues"
In the final Labgrid configuration below, the tcPDU is reachable at:
http://admin:admin@10.44.3.94
If this is a production setup, change the default credentials before regular use.
4. Identify the USB Devices
Before writing exporter.yaml, collect stable IDs for every UART and debugger.
For UART adapters:
ls -l /dev/serial/by-id
For USB debuggers and other USB devices:
lsusb
udevadm info --query=property --name=/dev/ttyUSB0
Useful fields are:
- ID_SERIAL for UART adapters
- ID_PATH for the physical USB path
- ID_VENDOR_ID and ID_MODEL_ID for debugger matching
These values are what keep your Labgrid setup stable across reboots.
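As a quick sketch, the filter below keeps only the properties used for matching in exporter.yaml. The canned printf sample stands in for a real udevadm run so the example is self-contained:

```shell
# Keep only the udev properties that exporter.yaml matches on.
match_fields() {
  grep -E '^ID_(SERIAL|PATH|VENDOR_ID|MODEL_ID)='
}

# Real use: udevadm info --query=property --name=/dev/ttyUSB0 | match_fields
# Canned sample so the sketch runs anywhere:
printf '%s\n' \
  'DEVNAME=/dev/ttyUSB0' \
  'ID_SERIAL=FTDI_FT232R_USB_UART_A503Y12Y' \
  'ID_PATH=pci-0000:80:14.0-usb-0:6.4' | match_fields
```

Run this once per adapter and paste the resulting values into the exporter configuration.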
5. Write the Exporter Configuration
The exporter configuration describes the physical resources connected to the exporter host. In this setup, one group is used per board.
Example ~/.config/labgrid/exporter.yaml:
board1:
  location: "Munich"
  uart:
    cls: USBSerialPort
    match:
      "@ID_SERIAL": "Silicon_Labs_CP2102N_USB_to_UART_Bridge_Controller_1e574c842d61ed1193564f8009472825"
  debugger:
    cls: USBDebugger
    match:
      ID_PATH: "pci-0000:80:14.0-usb-0:4"
      ID_VENDOR_ID: "064b"
      ID_MODEL_ID: "2504"
  power:
    cls: NetworkPowerPort
    model: tinycontrol_tcpdu
    host: "http://admin:admin@10.44.3.94"
    index: 1

board2:
  location: "Munich"
  uart:
    cls: USBSerialPort
    match:
      "@ID_SERIAL": "Silicon_Labs_CP2102N_USB_to_UART_Bridge_Controller_8624ff6b1261ed11a8d3518009472825"
  debugger:
    cls: USBDebugger
    match:
      ID_PATH: "pci-0000:80:14.0-usb-0:6.2"
      ID_VENDOR_ID: "064b"
      ID_MODEL_ID: "2504"
  power:
    cls: NetworkPowerPort
    model: tinycontrol_tcpdu
    host: "http://admin:admin@10.44.3.94"
    index: 2

board3:
  location: "Munich"
  uart:
    cls: USBSerialPort
    match:
      "@ID_SERIAL": "FTDI_FT232R_USB_UART_A503Y12Y"
  debugger:
    cls: USBDebugger
    match:
      ID_PATH: "pci-0000:80:14.0-usb-0:6.4"
      ID_VENDOR_ID: "064b"
      ID_MODEL_ID: "0617"
  power:
    cls: NetworkPowerPort
    model: tinycontrol_tcpdu
    host: "http://admin:admin@10.44.3.94"
    index: 7

usb-devices:
  location: "Munich"
  copilot-power-gpio:
    cls: SysfsGPIO
    index: 1339
What this file means
- each top-level key such as board1 is one exported resource group
- uart exports the serial console path
- debugger exports the JTAG adapter for OpenOCD
- power exports one tcPDU outlet through NetworkPowerPort
- usb-devices is a separate group for shared helper devices such as GPIO lines
This is the most important file in the whole setup. If the matching here is wrong, nothing built on top of it will be reliable.
6. Start the Coordinator and Exporter
First make sure the client and exporter know where the coordinator is:
printenv | grep LG_COORDINATOR
grep -r LG_COORDINATOR ~/.config
Then start or check the exporter service:
sudo systemctl --user -M labgrid@.host status labgrid-exporter.service
sudo systemctl --user -M labgrid@.host start labgrid-exporter.service
Now verify that the coordinator can see the exported resources:
labgrid-client -v resources
labgrid-client who
At this point, the resources exist, but they are not yet assigned to user-facing places.
7. Create Places and Match Resources
A place is the object users and CI jobs work with. Usually you create one place per board.
Create the places:
labgrid-client -p board1 create
labgrid-client -p board2 create
labgrid-client -p board3 create
Use labgrid-client -v resources to confirm the exported group names, then match those groups into the places:
labgrid-client -p board1 add-match '*/board1/*'
labgrid-client -p board2 add-match '*/board2/*'
labgrid-client -p board3 add-match '*/board3/*'
If you want the shared GPIO group available in a place, add a second match:
labgrid-client -p board1 add-match '*/usb-devices/*'
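The add-match patterns are globs over resource paths of the form exporter/group/class/name, as shown by labgrid-client -v resources. Labgrid matches each path component separately, but for simple patterns like these, plain shell globbing gives the same answer, so a standalone case statement can illustrate which paths a pattern selects (the example paths are made up):

```shell
# Glob-matching sketch: which resource paths '*/board1/*' selects.
# Paths are hypothetical examples of exporter/group/class/name.
matches() {  # usage: matches <pattern> <path>
  case "$2" in
    $1) echo yes ;;
    *)  echo no ;;
  esac
}

matches '*/board1/*' 'labhost/board1/USBSerialPort/uart'   # -> yes
matches '*/board1/*' 'labhost/board2/USBSerialPort/uart'   # -> no
```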
Check the result:
labgrid-client places
labgrid-client -p board1 show
In many setups, places.yaml and resources.yaml are created or updated automatically by Labgrid. You normally edit exporter.yaml yourself, and Labgrid persists the discovered state in the other files.
8. Acquire a Place Before Using It
The normal workflow is:
- acquire the place
- use console, power, or bootstrap
- release the place
Example:
labgrid-client -p board1 acquire
labgrid-client -p board1 power off
labgrid-client -p board1 power on
labgrid-client -p board1 console
labgrid-client -p board1 release
This acquisition step matters because it prevents two users or jobs from driving the same hardware at the same time.
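To make the acquire/release discipline harder to forget in scripts, a small wrapper can guarantee the release even when a step fails. This is a sketch; with_place and the overridable LABGRID variable are my own conventions, not part of labgrid-client:

```shell
# Run a command with a place held, releasing it afterwards regardless
# of success. LABGRID can be overridden (e.g. to 'echo' for a dry run).
LABGRID="${LABGRID:-labgrid-client}"

with_place() {  # usage: with_place <place> <command...>
  place="$1"; shift
  "$LABGRID" -p "$place" acquire || return 1
  "$@"
  rc=$?
  "$LABGRID" -p "$place" release
  return $rc
}

# Example: LABGRID=echo with_place board1 echo 'power cycle goes here'
```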
9. Create a Client Environment for a Remote Place
For interactive work or automated tests, create a small environment file that points to one place. You may call it remote.yaml or env.yaml. The name is not important; the content is.
Example remote-board1.yaml:
targets:
  main:
    resources:
      RemotePlace:
        name: board1
    drivers:
      SerialDriver: {}
      NetworkPowerDriver:
        delay: 2.0
      OpenOCDDriver:
        search: "/usr/share/openocd/scripts"
        interface_config: "ftdi/your-adapter.cfg"
        board_config: "your-board.cfg"
This file tells the client:
- use the already defined Labgrid place called board1
- bind a serial driver to the remote UART resource
- bind a power driver to the remote tcPDU outlet
- bind OpenOCD to the remote debugger
If you already keep both env.yaml and remote.yaml, you can split the same data however you prefer. The key point is that the client environment must contain RemotePlace and any drivers you want to use.
10. Understand How OpenOCD Works in Labgrid
Labgrid does not expose OpenOCD as a separate labgrid-client openocd command.
The usual path is:
labgrid-client -c remote-board1.yaml bootstrap <file>
bootstrap works because OpenOCDDriver implements Labgrid’s BootstrapProtocol. When the place is acquired, Labgrid uses the debugger resource from that place and starts OpenOCD with the matching USB path.
Minimal usage:
labgrid-client -c remote-board1.yaml acquire
labgrid-client -c remote-board1.yaml bootstrap build/boot.bin
labgrid-client -c remote-board1.yaml release
11. Configure OpenOCD for Your Board
The OpenOCDDriver accepts a few important arguments:
- search: location of the OpenOCD scripts directory
- config: extra local config file or files
- interface_config: adapter config from openocd/scripts/interface/
- board_config: board config from openocd/scripts/board/
- load_commands: custom commands that replace the default bootstrap sequence
Typical example:
targets:
  main:
    resources:
      RemotePlace:
        name: board1
    drivers:
      OpenOCDDriver:
        search: "/usr/share/openocd/scripts"
        interface_config: "ftdi/your-adapter.cfg"
        board_config: "your-board.cfg"
Then run:
labgrid-client -c remote-board1.yaml acquire
labgrid-client -c remote-board1.yaml bootstrap build/boot.bin
labgrid-client -c remote-board1.yaml release
12. Override the OpenOCD Load Commands When Needed
By default, OpenOCDDriver builds a sequence like this:
init
bootstrap {filename}
shutdown
That is fine if your OpenOCD setup supports the bootstrap command for the file you pass in.
If your board flow needs something else, define load_commands.
For example, for an SVF file:
targets:
  main:
    resources:
      RemotePlace:
        name: board1
    drivers:
      OpenOCDDriver:
        search: "/usr/share/openocd/scripts"
        interface_config: "ftdi/your-adapter.cfg"
        board_config: "your-board.cfg"
        load_commands:
          - "init"
          - "svf -quiet {filename}"
          - "shutdown"
Then use the same client command:
labgrid-client -c remote-board1.yaml acquire
labgrid-client -c remote-board1.yaml bootstrap build/boot.svf
labgrid-client -c remote-board1.yaml release
This is the clean way to adapt OpenOCD per board. You do not need to invent a separate custom hook for every device.
13. A Practical Bring-Up Sequence
When I bring up a new board in Labgrid, I usually follow this order:
- Confirm tcPDU API access.
- Confirm UART and debugger USB IDs on the exporter host.
- Write exporter.yaml.
- Start the exporter and verify resources appear.
- Create a place and add the correct matches.
- Acquire the place.
- Test power off and power on.
- Test console.
- Add OpenOCDDriver to the client environment.
- Test bootstrap with a known-good file.
This order keeps failure analysis simple. If bootstrap fails, you already know power and console work.
14. Useful Commands to Keep Nearby
labgrid-client resources
labgrid-client places
labgrid-client who
labgrid-client -p board1 show
labgrid-client -p board1 acquire
labgrid-client -p board1 power cycle
labgrid-client -p board1 console
labgrid-client -p board1 release
labgrid-client -c remote-board1.yaml bootstrap build/boot.bin
15. Final Notes
The main design idea is straightforward:
- exporter.yaml describes real hardware
- places group those resources into usable test slots
- the client acquires a place
- OpenOCD is used through bootstrap, not through a separate OpenOCD subcommand
Once this structure is in place, adding more boards is easy. You repeat the same pattern: one board group in exporter.yaml, one place in the coordinator, one remote environment file for users or CI.
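As a closing sketch, a fourth board would only need one more group in exporter.yaml following the same shape. Everything below is a placeholder: board4, the outlet index, and the serial string are hypothetical and must come from your own udev and tcPDU data:

```yaml
board4:
  location: "Munich"
  uart:
    cls: USBSerialPort
    match:
      "@ID_SERIAL": "your-adapter-serial"
  power:
    cls: NetworkPowerPort
    model: tinycontrol_tcpdu
    host: "http://admin:admin@10.44.3.94"
    index: 3
```

After restarting the exporter, create the matching place with labgrid-client -p board4 create and add-match '*/board4/*', exactly as for the first three boards.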