Test Jigs

Published On:
Mar 8, 2022
Last Updated:
Feb 21, 2025

Test jigs are production-line devices used to both program and test PCBs. This page focuses on the design of test jigs used to test assembled PCBs (PCBAs) that have not yet been fitted into the product (or, if they have, the PCBA's surface is still accessible for pogo pin connections).

Pogo Pins

Most test jigs use pogo pins (spring-loaded connectors) to make electrical contact with the PCB in many places during testing. This avoids the need to plug in traditional connectors, which are slower, more expensive (as the PCB needs a mating connector) and take up more space on the PCB.

Small surface-mount pads can be added to the PCB wherever you want the pogo pins to make contact. For high-current applications, you can use:

  • More pogo pins in parallel
  • Larger pogo pins
  • Traditional wire-to-board connectors (although this slows down production, it may be unavoidable if pogo pins are simply not sufficient).

Pogo pin tips are available in a range of shapes, including rounded, single-point, and multi-point. I have seen rounded pogo pin tips make poor electrical contact with the PCB. I assume this was due to residual flux on the PCB, or oxidation of the tip or PCB pad, which the rounded tip could not break through. For this reason I recommend using pointed-style tips (which I assume can “pierce” through the flux or oxidation) with a strong spring force.

Another point worth considering when using pogo pins is that you will not be testing the connectors on the PCBA. In most cases this is not a big issue, since connectors are rarely the faulty parts, and testing them would mean plugging into each connector rather than relying solely on pogo pins, drastically slowing down the testing process.

Software

You typically need some form of computer to control the test jig and run the tests. While you could use a microcontroller for this, it is generally a better idea to use a machine running a full operating system such as Windows or Linux. This allows you to easily develop the software in a language such as Python, connect to external databases to record the results, and use a variety of tools to help you debug and test the software. If parts of the testing require real-time control that a general-purpose OS is not capable of, you can complement the OS with a microcontroller to handle the real-time aspects.

Python is a great language for this as it's quick to write, has a large ecosystem of libraries for interfacing with embedded devices (e.g. serial ports, programmers, I/O boards, etc.) and is easy to debug.
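
For example, here is a minimal sketch of talking to a device under test over a serial port using the 3rd-party pyserial library (pip install pyserial). The port name, baud rate and command are assumptions for illustration only:

import serial  # 3rd party "pyserial" package

# Port name, baud rate and command are illustrative; adjust for your device
with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1.0) as ser:
    ser.write(b"GET_TEMP\r\n")  # Hypothetical command for the device under test
    response = ser.readline()   # Reads until newline or timeout
    print(response.decode(errors="replace").strip())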

You might be tempted to use a unit test library to structure the software. However, unit test frameworks are designed to run small tests that are independent and isolated from each other. This does not fit end-of-line production testing, where many tests require strict ordering and depend on earlier tests passing. For example, you can't test much before the device has been programmed, and configuration and other communication may have to occur before the main tests.

Test Functions

Write each test as its own function. This allows you to write a generic def run_test(test_function) function that can perform common tasks just before and after each test step (such as checking whether the test passed or failed, logging the result, and measuring the time taken to run the test). It also makes it easy to reorder the tests or combine different test steps into different complete test sequences.
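
As a minimal sketch of what such a run_test() wrapper might look like (the convention that each test function returns True on pass and False on fail is an assumption carried through the examples below):

import logging
import time

logger = logging.getLogger(__name__)  # See the "Log Everything" section below

def run_test(test_function) -> bool:
    """Run a single test step with common logging and timing."""
    name = test_function.__name__
    logger.info(f"Running {name}...")
    start = time.monotonic()
    passed = test_function()  # Convention: returns True (pass) or False (fail)
    duration_s = time.monotonic() - start
    if passed:
        logger.info(f"{name} PASSED in {duration_s:.2f}s")
    else:
        logger.error(f"{name} FAILED in {duration_s:.2f}s")
    return passed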

You may be tempted to gather the test functions into a list and then loop through them, as shown below:

list_of_tests = [
    test_program_device,
    test_connect_via_serial,
    test_configure_device,
    test_check_temperature,
    test_write_serial_number,
    test_save_results_to_database,
]
# Then...
for test in list_of_tests:
    test()

I don’t recommend this approach, as it makes it very difficult to add conditional logic to the running of the tests. For example, you may only want to run the test_configure_device test if you detect during the connection test that it is not already configured. Instead:

def run_tests():
    run_test(test_program_device)
    run_test(test_connect_via_serial)
    run_test(test_configure_device)
    run_test(test_check_temperature)
    run_test(test_write_serial_number)
    run_test(test_save_results_to_database)
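
Conditional logic then slots in naturally. For example (device_is_configured() is a hypothetical helper that inspects the state found during the connection test):

def run_tests():
    run_test(test_program_device)
    run_test(test_connect_via_serial)
    # Only run the configuration step if the connection test
    # found the device was not already configured
    if not device_is_configured():  # Hypothetical helper
        run_test(test_configure_device)
    run_test(test_check_temperature)
    run_test(test_write_serial_number)
    run_test(test_save_results_to_database)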

Log Everything

The number of failure modes in production-line testing can be quite high, and the modes themselves hard to predict. For this reason, it is important to log everything. This gives the tester the best chance to diagnose, categorise and fix the problem (the fixing may occur later by someone else, or not at all if the cost to repair is too high compared to the cost of the device).

If writing the software in Python, you can use the built-in logging library (import logging) to log messages. I like to write a helper function that configures each logger in a standard way. The example below uses the 3rd-party colorlog library to add coloured output to the logs (if the output is a “tty-like” terminal, not a file).

logging_helper.py
import logging

import colorlog

def get_logger(name: str) -> logging.Logger:
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    # Avoid adding duplicate handlers if called twice with the same name
    if logger.handlers:
        return logger
    # Create a coloured stream handler
    handler = colorlog.StreamHandler()
    # Configure the formatter with colours
    # (yellow for warnings, red for errors)
    formatter = colorlog.ColoredFormatter(
        '%(log_color)s%(asctime)s - %(levelname)s - %(name)s:%(lineno)d - %(message)s',
        log_colors={
            'DEBUG': 'cyan',
            'INFO': 'white',
            'WARNING': 'yellow',
            'ERROR': 'red',
            'CRITICAL': 'red,bg_white',
        }
    )
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    return logger

And then in your other files you would use get_logger() like this:

main.py
from pathlib import Path

from logging_helper import get_logger

logger = get_logger(Path(__file__).name)

def test_program_device():
    logger.info("Programming device...")
    try:
        # Call the device programming function (defined elsewhere)
        programming_function()
        logger.info("Device programmed successfully")
    except Exception as e:
        logger.error(f"Device programming failed: {e}")
        return False
    return True

Save Test Results to a Database

It is a good idea to save the test results somewhere. This can be as simple as writing to a local CSV/JSON file, or as involved as saving to a cloud-hosted database. Cloud-hosted databases are not that hard to set up these days (e.g. Supabase makes it really easy, and is backed by PostgreSQL), so I would recommend doing that unless you have a strong reason to keep the data local (if so, SQLite is a good option that is a little more organized than just dumping to text files).
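
If you do start with a local file, a minimal sketch of appending one result per test attempt using Python's built-in csv module (the file name and columns are illustrative):

import csv
from datetime import datetime, timezone

def save_result_to_csv(serial_number: str, test_result: str) -> None:
    # Append one row per test attempt; columns are illustrative
    with open("test_results.csv", "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([
            datetime.now(timezone.utc).isoformat(),
            serial_number,
            test_result,  # e.g. "PASSED" or "FAILED"
        ])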

Using a database also allows you to easily query the data later from another location, and makes it easy to support multiple test jigs. Permissions on the database can be locked down to only allow the test jig to insert or update new records (update is needed if you want to re-run devices through the test jig and update the provisioned device record).

If using a database, I would recommend having at least two tables: one for the test attempts and one for the provisioned devices (devices which pass all the tests and are good to go in production).

Test attempt table columns:

  • id: Unique incrementing integer that can identify a test attempt.
  • serial_number: The serial number of the device being tested. This could be any form of unique identifier, such as a MAC address, MCU serial number, or one programmed onto the device yourself.
  • test_result: Either PASSED or FAILED.
  • test_steps: A JSON array of the steps that were taken during the test. Because each test step could contain arbitrary data, it’s usually better to not structure this data and just accept JSON. Each test step could contain information such as a unique name (e.g. CHECK_TEMPERATURE) and a short message (e.g. "Read back temperature of 25.3°C. Lower limit: 24.0°C, Upper limit: 26.0°C.").
  • tester_id: The unique identifier of the test jig that was used to test the device. Typically read from a config file by the test jig software.
  • timestamp: The timestamp of the test attempt.
  • logs: A JSON array of the logs from the test attempt. You may only want to save the logs if the test failed as the logs are usually quite verbose and could quickly consume your storage space.

Only devices which pass all the tests are added to the provisioned devices table. The columns in the provisioned devices table are:

  • id: Unique incrementing integer that can identify a provisioned device.
  • serial_number: The serial number of the device being tested. This could be any form of unique identifier, such as a MAC address, MCU serial number, or one programmed onto the device yourself.
  • pcb_version: The version of the PCB that was tested.
  • firmware_version: The version of the firmware that was programmed onto the device (if applicable). For more complex devices, there may be multiple firmware versions (e.g. bootloader, application, multiple MCUs, etc.). I would recommend storing each one as a separate column.
  • tester_id: The unique identifier of the test jig that was used to test the device. Typically read from a config file by the test jig software.
  • created: The timestamp of the creation of the provisioned device record.
  • last_updated: The timestamp of the last update to the provisioned device record.

Note that while the test jig only needs to insert new records into the test attempt table, it usually needs to be able to insert and update (or just “upsert”) records into the provisioned devices table. This is because the same device may be tested multiple times, e.g.:

  • During development, you will be writing the Python code and running the tests again and again on the same device.
  • During production, you may want to re-test a previously provisioned device if you change something, improve a test, or are just not sure about its testing state (see the sketch after this list).
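
To make the insert/upsert split concrete, here is a minimal sketch using Python's built-in sqlite3 module (table and column names follow the suggestions above; a cloud database such as Supabase would use its own client, but the idea is the same). Note that ON CONFLICT ... DO UPDATE requires SQLite 3.24 or newer and a unique constraint on serial_number:

import sqlite3

con = sqlite3.connect("test_results.db")
con.execute("""CREATE TABLE IF NOT EXISTS provisioned_devices (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    serial_number TEXT UNIQUE NOT NULL,
    pcb_version TEXT,
    firmware_version TEXT,
    tester_id TEXT,
    created TEXT DEFAULT CURRENT_TIMESTAMP,
    last_updated TEXT DEFAULT CURRENT_TIMESTAMP
)""")

def upsert_provisioned_device(serial_number, pcb_version, firmware_version, tester_id):
    # Insert a new record, or update the existing one if this
    # device has been provisioned before (the re-test case)
    con.execute("""INSERT INTO provisioned_devices
            (serial_number, pcb_version, firmware_version, tester_id)
        VALUES (?, ?, ?, ?)
        ON CONFLICT(serial_number) DO UPDATE SET
            pcb_version = excluded.pcb_version,
            firmware_version = excluded.firmware_version,
            tester_id = excluded.tester_id,
            last_updated = CURRENT_TIMESTAMP""",
        (serial_number, pcb_version, firmware_version, tester_id))
    con.commit()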

Test Step Parallelism

If your test steps take a significant amount of time, you may want to consider running some of them in parallel. Obviously this would only apply to test steps that are independent of each other (e.g. a test step that needs to reset the MCU cannot really be run at the same time as a test step that needs to communicate with it).

Test step parallelism can be achieved using the built-in threading library in Python. Note that Python threads do not provide true parallelism due to the GIL (Global Interpreter Lock), but they are fine for this use case because the test steps are typically IO-bound, not CPU-bound.
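
A minimal sketch of running independent test steps concurrently, reusing the run_test() wrapper from earlier (test_check_voltage is a hypothetical independent step, used for illustration):

import threading

def run_tests_in_parallel(*test_functions):
    """Run independent test steps concurrently and wait for all to finish."""
    threads = [
        threading.Thread(target=run_test, args=(fn,)) for fn in test_functions
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # Block until all parallel steps have completed

# These steps must be independent (e.g. neither may reset the MCU)
run_tests_in_parallel(test_check_temperature, test_check_voltage)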

If you are providing the tester with a command-line interface, watch out for threads interfering with the Ctrl-C (terminate the program) behaviour. Ctrl-C is only caught in the main thread, and if other threads are still running parallel test steps, the program will not exit as expected. Two options to fix this are:

  • Make the worker threads daemon threads, so that they are automatically terminated when the main thread exits.
  • Catch the Ctrl-C signal in the main thread and then use a threading.Event to signal the worker threads to exit. Make sure the worker threads periodically check the event and exit if it is set, as sketched below.
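
A minimal sketch of the second option; the worker's test step is assumed to be breakable into small chunks between which it can check the event:

import threading
import time

stop_event = threading.Event()

def worker():
    # A long-running test step broken into small chunks, checking
    # the stop event between chunks so it can exit promptly
    while not stop_event.is_set():
        time.sleep(0.1)  # Stand-in for a small chunk of real test work

t = threading.Thread(target=worker)
t.start()
try:
    while t.is_alive():
        t.join(timeout=0.5)  # Join with a timeout so Ctrl-C is still caught
except KeyboardInterrupt:
    stop_event.set()  # Ask the worker to exit
    t.join()
    print("Aborted by user.")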

Suppliers

Redback Test Services

An Australian company that designs and manufactures test jigs.

INGUN

A manufacturer of test fixtures, including the MA 160 manual test fixture. Demo video: https://www.youtube.com/watch?v=NeKGzh2sa1k