Case study · Testing & validation

PULSE: Test bench automation – from zero to productive in 8 weeks

Test automation developed on site for HV battery testing with climate chamber, source/sink and BMS connectivity via CAN – including end-to-end sequences, automated evaluation, DUT fault detection and audit-proof data logging.

Request test automation project
−60 % test time per run
+78 % more precise fault detection
−18 k€ licence costs saved p.a.
8 weeks to full automation

Initial situation & project scope

Situation in the initial state

  • Each individual component (climate chamber, control electronics, power electronics, data acquisition, battery) had to be configured manually by an operator.

  • Manual start/stop of tests and of each component, separate user interfaces for each component, Excel-based result reports only from the data logger (no recording of other components).

  • No resume capability after a fault or power outage – test runs often had to be restarted from scratch.

  • High dependency on multiple software licences and individual know-how in the lab for different components.

  • Different procedures depending on the test engineer – comparability and reproducibility were hardly ensured.

Targets & KPIs in the initial state

Metric               Before    Comment
Test time per run    10 h      incl. manual evaluation
Report generation    2 h       Screenshots, manual report creation
Licences in use      6         ~€28,000 annual licence cost

Tool landscape

Multiple tools used in parallel with high licence costs led to complex workflows and issues in data conversion between target signals, measurement, logging and reporting.

Manual procedures

Start/stop, limit checks and plausibility checks were performed manually. Fault detection was largely random and depended heavily on the experience of individuals – not on systematic rules.

Reporting & data

Reports were created in Excel, screenshots were added and sent as PDF. There was no audit-proof, centrally versioned data repository.

Safety & compliance

Safety measures and automatic emergency shutdowns were only partially integrated into the test sequences. Traceability was limited (e.g. the exact reason for an aborted test was not always clear).

Scope: automation architecture & focus of the study

The goal was end-to-end test automation – from describing test sequences and connecting equipment through to data processing and reporting – without depending on proprietary single-purpose tools.

Process definition & risk assessment

Design of a modular communication and sequence description with parameters, limits, start/stop/restart mechanisms and safety checks and shutdown logic – standardised across all tests.
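Such a sequence description can be sketched in a few lines. The following is a minimal illustration of the idea – step, parameter and signal names are hypothetical, not the actual in-house format:

```python
from dataclasses import dataclass, field

@dataclass
class Limit:
    signal: str      # e.g. "cell_voltage"
    low: float
    high: float

    def ok(self, value: float) -> bool:
        return self.low <= value <= self.high

@dataclass
class TestStep:
    name: str
    setpoints: dict                          # e.g. {"chamber_temp": 45.0}
    limits: list = field(default_factory=list)
    duration_s: float = 0.0

def run_sequence(steps, apply_setpoints, read_signal, safe_stop):
    """Execute steps in order; trigger the shutdown logic on any limit violation."""
    for step in steps:
        apply_setpoints(step.setpoints)
        for lim in step.limits:
            if not lim.ok(read_signal(lim.signal)):
                safe_stop(f"{lim.signal} out of range in step '{step.name}'")
                return False
    return True
```

Because steps, limits and the shutdown hook are plain data, the same runner can be reused across all tests – which is exactly what makes sequences comparable.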

Device drivers & interfaces

Development of dedicated communication interfaces for test equipment (climate chamber, control/power electronics, data acquisition, battery) using various communication protocols (CAN, CAN FD, LIN, Ethernet, RS232, USB), including robust error handling.
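The common pattern behind such drivers is a uniform interface per device plus retry-based error handling for transient bus faults. A minimal sketch of that pattern (class and function names are illustrative; the real buses would sit behind CAN/serial libraries):

```python
import time
from abc import ABC, abstractmethod

class DeviceError(Exception):
    """Raised by a driver when the device or bus misbehaves."""

class DeviceDriver(ABC):
    """Uniform surface every bench device exposes, regardless of its bus."""
    @abstractmethod
    def connect(self) -> None: ...
    @abstractmethod
    def read(self, signal: str) -> float: ...
    @abstractmethod
    def write(self, signal: str, value: float) -> None: ...

def with_retries(action, attempts: int = 3, delay_s: float = 0.0):
    """Retry a transient bus operation before escalating to the safety layer."""
    last = None
    for _ in range(attempts):
        try:
            return action()
        except DeviceError as exc:
            last = exc
            time.sleep(delay_s)
    raise DeviceError(f"giving up after {attempts} attempts") from last
```

Keeping the retry policy outside the drivers means every device gets the same, testable error behaviour.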

Data, reporting & traceability

Real-time data logging, automatic limit checks, PDF/CSV reports, audit-proof storage and clear linking to test cases (e.g. via a Jira/CSV bridge) – robustness and reproducibility confirmed in audits.

Project timeline & enablement

On-site in around 8 weeks: first working prototype (partial automation) after one month, requirement-compliant full automation in the second month, followed by rollout, training, documentation and stabilisation of the tool landscape.

System after automation

Situation after automation

  • A fully integrated process for controlling the test equipment (climate chamber, control/power electronics, data acquisition, battery) – with clearly defined test steps and parameters.

  • Automatic fault detection for all test devices and DUTs – monitoring of relevant limits increases accuracy and reduces misinterpretation.

  • Automatic notification of the test owner (email, SMS) in case of fault detection, limit violation or test abort – including all relevant measurement data and logs.

  • Automatically generated PDF and CSV reports, audit-proof storage and clear assignment to the corresponding test programme.
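The notification step from the list above can be sketched with the Python standard library. Sender address and SMTP relay below are placeholders for whatever mail gateway the lab network provides:

```python
import smtplib
from email.message import EmailMessage

def build_alert(owner: str, test_id: str, reason: str, attachments=()):
    """Compose an abort notification; attachments: iterable of (filename, bytes)."""
    msg = EmailMessage()
    msg["From"] = "testbench@example.com"      # placeholder sender
    msg["To"] = owner
    msg["Subject"] = f"[PULSE] {test_id} stopped: {reason}"
    msg.set_content(f"Test {test_id} was stopped.\nReason: {reason}\nLogs attached.")
    for name, data in attachments:
        msg.add_attachment(data, maintype="application",
                           subtype="octet-stream", filename=name)
    return msg

def send_alert(msg, host="smtp.example.com", port=25):
    # assumes a reachable SMTP relay inside the lab network
    with smtplib.SMTP(host, port) as server:
        server.send_message(msg)
```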

Measurable results

Metric               After    Comment
Test time per run    4 h      Fully automated flow, less waiting time
Report generation    3 min    Auto-report (PDF + CSV) directly after test completion
Licences in use      2        Consolidated software landscape, fewer dependencies

Throughput & test time optimisation

Shorter test duration per run and less setup effort enable more runs per week – with the same team size in the lab.

Fault detection & accuracy

Automated fault detection at all test devices and DUTs. Limit checks and standardised evaluation logic increase comparability – and relieve experts from routine checks.

Software consolidation

Replacing redundant tools and proprietary scripts reduces costs, simplifies IT security and makes the automation solution maintainable in the long term.

Safety · compliance · data integrity

Interlocks, watchdogs, safe stop sequences, hash stamps and structured data storage keep the evidence required for UN 38.3 / IEC 62133 / ECE R100 audit-ready.
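One common way to implement such hash stamps is a SHA-256 chain over the log records, shown here as an illustrative sketch (not necessarily the exact scheme used in the project): each stamp depends on all earlier records, so any later edit is detectable.

```python
import hashlib

GENESIS = "0" * 64  # starting value for a fresh log

def stamp_chain(records, prev=GENESIS):
    """Hash each record together with its predecessor's stamp;
    editing any earlier record invalidates every later stamp."""
    stamps, h = [], prev
    for rec in records:
        h = hashlib.sha256((h + rec).encode("utf-8")).hexdigest()
        stamps.append(h)
    return stamps

def verify_chain(records, stamps, prev=GENESIS) -> bool:
    return stamp_chain(records, prev) == stamps
```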

Success: technological added value & organisational impact

Test bench automation did more than just reduce test time.

The solution standardises workflows, delivers robust evidence for audits and makes the lab scalable – while significantly reducing licence costs.

Organisational & technical added value

Automation decouples test quality from individual people, creates standardised workflows and reduces coordination effort between lab, purchasing and engineering.

At the same time, the architecture remains open: sequences can be extended in-house, new devices are integrated via drivers and the system can be rolled out to other labs as well.

For the customer this means: results in less time, higher test quality and ongoing savings that typically pay back the investment in under 6 months – with around 3 runs per week and 4 hours saved per run.

We achieve similar effects with our cost-optimisation methodology NOVA – for example in reducing bill-of-materials and production costs of battery packs. See the NOVA case study

−60 %

Reduced test time

From 10 h down to 4 h per run.

−97.5 %

Faster reporting

From 120 minutes to 3 minutes.

+78 %

Higher test accuracy

thanks to automated fault detection.

−€18,000

Tool costs saved

From 6 tools down to 2 core tools.

Interested in 60 % less test time and more precise results?

We’ll show you how to gradually automate your existing test bench landscape – from the first potential analysis through to a productive solution with measurable impact.

Schedule an initial call

Learn more about the team behind the test bench automation on our About us page.

Step-by-step approach to test bench automation

We work iteratively – starting with an early prototype on the test bench and clear milestones on the way to full productive automation. Typically, a project runs like this:

1. Short call (15–30 min): understanding your lab environment, DUTs and goals.

2. Remote or on-site assessment (1–2 days): inventory of equipment, interfaces and existing tools.

3. Concept & proposal: architecture concept, effort estimate, high-level timeline.

4. Prototype implementation (approx. 4 weeks): first partial automation with selected sequences.

5. Prototype go-live: parallel operation alongside existing workflows, fine-tuning based on lab feedback.

6. Extension to full automation: coverage of all relevant test cases and special cases.

7. Introduction of auto-reporting, data storage & interfaces (e.g. Jira/CSV).

8. Training of lab and development teams, handover of the sequence library.

9. Stabilisation & rollout to additional test benches / sites.

10. Review workshop & refinement: documentation, SOPs, payback analysis.