

4.1.3 System level testing

The testbench for the top-level design is sketched in figure 4.3. The testbench can be set up for testing of a single SAMPA or for testing of two SAMPAs in daisy-chained mode by changing a parameter at compile time. When testing only a single SAMPA, the boxes marked in grey are excluded from the simulation to increase the simulation speed. By changing another parameter at compile time, it is possible to run the simulations with gate level code, both with and without back annotated timing. A wrapper encapsulates the SAMPA code to model some of the analogue behaviours, such as the I2C tristate driver and the SLVS enable signals.
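
As a minimal sketch, such compile-time selection could be expressed with VHDL generics and conditional generates; the generic names and the wrapper entity below are illustrative, not the actual names used in the SAMPA testbench:

```vhdl
-- Sketch of compile-time testbench configuration (illustrative names).
entity sampa_tb is
  generic (
    G_NUM_SAMPAS : positive := 1;     -- 2 selects the daisy-chained setup
    G_GATE_LEVEL : boolean  := false  -- true: simulate the gate level netlist
  );
end entity sampa_tb;

architecture sim of sampa_tb is
begin
  -- The second device and its support blocks (the grey boxes in
  -- figure 4.3) are only elaborated for the daisy-chained configuration,
  -- which keeps single-device simulations fast.
  gen_second_dut : if G_NUM_SAMPAS > 1 generate
    -- dut2 : entity work.sampa_wrapper port map ( ... );
  end generate gen_second_dut;
end architecture sim;
```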

The serial link data checkers operate independently from the test sequencer. For certain tests where checking is not needed, they can be disabled to avoid generating unnecessary errors. The task of each checker is to automatically synchronize to the incoming data like a real serial receiver, do integrity checks on the packet header, verify that the header and data are free of parity and Hamming errors, compare the received header and data against the next expected header and data from the behavioural model, do a decompression test to verify that the compressed data matches the raw input data, and finally recover gracefully when unexpected errors occur, so that testing can continue from the next packet.

If the serial link checker has been marked as not being the last checker in a chain, it additionally puts the output data in a buffer to be checked by the next checker in the chain. Since each serial link checker knows which chip address it receives data from, it ignores packets that are not from the expected chip address and forwards them directly to the buffer.

The behavioural model for the SAMPA contains separate processes for modelling of the clock generator, the event generator, the reset generator, and the 32 channels, including filtering, compression, and memory handling. Modelling of the channel pipeline is cycle-accurate and in phase with the pipeline in the SAMPA, so that issues can be debugged easily by comparing the data in the pipeline between the model and the real code. The behavioural model emulates the memories of the SAMPA as closely as possible, taking overflows into account. The memories and their pointers are accessible from the checker through the use of shared variables.
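
As a sketch of the shared-variable mechanism, a protected type can expose both a memory and its pointers to the checkers; the names and sizes below are illustrative, assuming a 10-bit sample memory:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

package model_mem_pkg is
  type mem_array_t is array (0 to 1023) of std_logic_vector(9 downto 0);

  type model_mem_t is protected
    procedure write(addr : in natural; data : in std_logic_vector(9 downto 0));
    impure function read(addr : natural) return std_logic_vector;
    impure function write_pointer return natural;  -- checker can spy on pointers
  end protected model_mem_t;
end package model_mem_pkg;

package body model_mem_pkg is
  type model_mem_t is protected body
    variable mem  : mem_array_t := (others => (others => '0'));
    variable wptr : natural := 0;

    procedure write(addr : in natural; data : in std_logic_vector(9 downto 0)) is
    begin
      mem(addr) := data;
      wptr      := (addr + 1) mod mem'length;  -- wraps like the real memory
    end procedure write;

    impure function read(addr : natural) return std_logic_vector is
    begin
      return mem(addr);
    end function read;

    impure function write_pointer return natural is
    begin
      return wptr;
    end function write_pointer;
  end protected body model_mem_t;
end package body model_mem_pkg;

-- In the testbench: shared variable ch_mem : work.model_mem_pkg.model_mem_t;
```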

The data generators can be set up to generate different sequences of signals depending on what is to be tested. It can produce full range sawtooth waves, which aids in debugging since it makes it easier to compare the output data of the model and the real circuitry when an issue occurs and determine if something happened too early or too late and by how many cycles. It can also be set up to run with random input data, which by adjusting the zero suppression threshold of the filters can easily emulate different data occupancies for verification of buffers and serial links. Additionally, it can read plain CSV files with sample data or custom data files from for instance TPC physics runs in 2010, which has both normal data as well as what is referred to as "black events", which have data from collisions with an exceptionally high amount of particle tracks.
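
A minimal sketch of the sawtooth case, assuming a 10-bit sample path; the entity and port names are illustrative:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity sawtooth_gen is
  port (
    clk    : in  std_logic;                      -- sampling clock
    sample : out std_logic_vector(9 downto 0));  -- emulated ADC sample
end entity;

architecture sim of sawtooth_gen is
begin
  process (clk)
    variable count : unsigned(9 downto 0) := (others => '0');
  begin
    if rising_edge(clk) then
      sample <= std_logic_vector(count);
      count  := count + 1;  -- wraps from 1023 to 0: full-range sawtooth
    end if;
  end process;
end architecture;
```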

As the testbench has been designed modularly, it is possible to include several devices in a chain if needed. This only requires instantiating new modules and routing the intercommunicating signals. However, the MCH only required two devices in a chain for the final design [14].

The BFMs previously created for the module-based testbenches are also reused in the system level testbench. This includes the JTAG and I2C BFMs. Additional BFMs have been created for reading and writing the global register, channel register, pedestal memory, and channel order register.

Configuration of the device happens through I2C, and since it only operates at a maximum of 1 MHz, it takes significant time to set up configurations for different tests. To avoid this, the testbench instead forces values onto the internal signals between the I2C module and the register module inside the SAMPA, once the I2C configuration path has been confirmed to be working satisfactorily in a separate test. Forcing and spying on signals is a feature available in the 2008 version of VHDL, but the Cadence simulator unfortunately has only limited support for VHDL-2008. Both Mentor and Cadence, however, support these features through their own special function calls that are not part of the standard. To make the testbench environment uniform across simulators, a generic package per simulator has been created.

It wraps each of these special functions with a generic function name so that the testbench can call the functions by a generic name; during compilation, only the package file for the specific simulator in use is included, which maps the generic function to the simulator-internal one. In this way, the testbench can be simulator agnostic.
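
The Questa/ModelSim variant of such a package could be sketched as below, building on Mentor's signal_spy utilities; the generic procedure name tb_force and the wrapping style are illustrative, and the Cadence variant would provide the same interface around the corresponding Cadence call:

```vhdl
library modelsim_lib;
use modelsim_lib.util.all;

package force_pkg is
  -- Generic name used by the testbench, identical across simulators.
  procedure tb_force(path : in string; value : in string);
end package force_pkg;

package body force_pkg is
  procedure tb_force(path : in string; value : in string) is
  begin
    -- Mentor-specific, non-standard call hidden behind the generic name.
    signal_force(path, value);
  end procedure tb_force;
end package body force_pkg;
```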

The testbench sequence is mainly focused on verifying the functionality of the design; the extensive verification is left to the module-based testbenches. The testbench verifies that all interfaces to the device function: this includes I2C, JTAG, the memory tester, the scan chain, clocks, resets, and triggers. It also verifies all data output modes, namely the direct readout serialization, the packet based serialization, and the daisy chaining. Additionally, it verifies that all test modes, like the bypass mode, ring buffer, direct readout combinatorial, etc., operate correctly. Overall, this covers the operational functionality of close to everything that can be controlled externally.

4.1.3.1 Scan chain verification

To minimize the workload and to avoid errors, the scan chain insertion, see section 3.3.1, was done by a tool (Cadence Encounter Test, now Modus [73]) instead of manually. In addition to the scan chain insertion, the tool also creates vectors for testing the design and creates a Verilog testbench that runs the complete vector test on the digital code. To verify that the generated vectors do not produce any false errors, the testbench was run on the gate level code with timing. To additionally verify that the fault-finding algorithm correctly detects an error, a wire was deleted from the design and the testbench was rerun. An error was produced as expected, and it was indicated to be at the point where the fault had been introduced.¹

¹Scan chain insertion and testing done by Bruno Sanches and Dionisio Carvalho, University of São Paulo, Brazil.

4.1.3.2 Clocking and Clock-Domain Crossing verification

Signals crossing between two asynchronous clock domains might experience metastability, which could cause incorrect data to be generated [74]. Since the SAMPA design employs four clock domains, where some of the clocks might be asynchronous if multiple clocks are externally sourced, it is important both to code the design to avoid any metastability and to verify that all crossings have been checked. The design uses two-register synchronizers for all signals passing between two clock domains [74]. Pulsed signals passing from a fast clock domain to a slow one are converted to signals that toggle on a rising edge before being passed to the slow domain; this still requires pulses to be separated by two or more cycles on the receiving end, but does not require the signal to stay high for multiple cycles. Multi-bit signals from counters without a control signal are Gray-encoded before being passed between clock domains, to avoid capturing a wrong value if the signal is captured in the receiving domain while some of the bits are still switching from one value to another. When a control signal is present in addition to the multi-bit signals, only the control signal is synchronized, and the data is expected to remain valid for enough cycles to sample it correctly.
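
Two of these building blocks, the two-register synchronizer and the Gray encoder, are sketched below with illustrative names; the _clkso and _clkadc postfixes follow the naming convention described in the next paragraph:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

package cdc_pkg is
  -- Gray encoding: adjacent counter values differ in exactly one bit, so
  -- a capture mid-transition is off by at most one count.
  function to_gray(bin : std_logic_vector) return std_logic_vector;
end package;

package body cdc_pkg is
  function to_gray(bin : std_logic_vector) return std_logic_vector is
  begin
    -- gray = bin xor (bin srl 1); assumes a descending (downto) range
    return bin xor ('0' & bin(bin'high downto bin'low + 1));
  end function;
end package body;

library ieee;
use ieee.std_logic_1164.all;

entity sync_2ff is
  port (
    clk_so     : in  std_logic;   -- receiving (slow) clock
    din_clkadc : in  std_logic;   -- signal arriving from the ADC clock domain
    dout_clkso : out std_logic);  -- synchronized into the clk_so domain
end entity;

architecture rtl of sync_2ff is
  signal meta_clkso : std_logic;  -- first register; may go metastable
begin
  process (clk_so)
  begin
    if rising_edge(clk_so) then
      meta_clkso <= din_clkadc;
      dout_clkso <= meta_clkso;   -- second register gives time to settle
    end if;
  end process;
end architecture;
```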

To avoid signals inadvertently being used in another clock domain without first being synchronized, a coding convention has been employed. For modules where multiple clock domains are present, the names of all signals have been appended with a postfix indicating the clock domain that the signal belongs to, e.g. signal_clkadc. This avoids errors being introduced while coding and also aids in spotting such errors while reviewing the code.

To verify correct operation of the design with asynchronous clocks, the top-level testbench was run with the device set to operate with all external clocks and the input clocks set to frequencies that were not a factor of each other. The cell library was modified so that when metastability is detected in a flip-flop, it randomly produces a 1 or a 0 on its output instead of outputting 'X', which would propagate through the design and prevent further testing. The regular set of tests was run without issues.
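
The actual modification was made to the vendor cell library, but the mechanism can be sketched in plain VHDL as below; the entity, generic, and threshold are assumptions for illustration:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.math_real.all;

entity dff_rand is
  generic (T_SETUP : time := 100 ps);  -- illustrative setup window
  port (
    clk : in  std_logic;
    d   : in  std_logic;
    q   : out std_logic);
end entity;

architecture sim of dff_rand is
begin
  process (clk)
    variable seed1 : positive := 1;
    variable seed2 : positive := 2;
    variable rnd   : real;
  begin
    if rising_edge(clk) then
      if d'last_event < T_SETUP then   -- data changed inside the setup window
        uniform(seed1, seed2, rnd);    -- resolve randomly instead of to 'X'
        if rnd > 0.5 then
          q <= '1';
        else
          q <= '0';
        end if;
      else
        q <= d;
      end if;
    end if;
  end process;
end architecture;
```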

Testbench verification will only catch a subset of the possible clock domain crossing issues, so to additionally verify that all crossings have been covered, Mentor Questa CDC [75] has been employed. This tool uses structural analysis of the code to detect clock domains and synchronizers and can report on any potential failure modes.

The Clock Domain Crossing (CDC) testing uncovered no specific issues, indicating that the employed design methodology prevented these issues.

Since the ADC requires clocks with a low amount of jitter, the clock generator was simulated at the layout level in circuit simulation and was found to add less than 50 ps of jitter to the ADC sampling clock.

4.1.3.3 Mixed-signal verification

Mixed-signal verification¹ involves connecting the analogue circuitry together with the digital to verify that communication between the analogue and the digital works as expected. Since the chip has been created with an analogue-on-top flow, i.e. the fully placed-and-routed digital design is instantiated as a block on the top-level analogue layout, there is a need to verify that the correct signals have been connected to the correct ports on the digital design.

¹AMS simulations conducted by Tiago Weber for v2 and Heiner Alarcon for v3, University of São Paulo, Brazil.

To speed up the debugging, the Analogue Mixed-Signal (AMS) verification is usually done in three stages. Firstly, a behavioural model of the analogue components is created in either Verilog-AMS or VHDL-AMS, which can then be simulated together with the digital code at higher simulation speeds. This primarily verifies the interaction between the various parts.
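
The kind of analogue detail such a model captures can be illustrated with the open-drain I2C pad mentioned earlier; the sketch below is a plain-VHDL stand-in with illustrative names, not the actual AMS model:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity i2c_pad is
  port (
    drive_low : in    std_logic;   -- digital core requests a low level
    sda       : inout std_logic);  -- bus line, pulled high externally
end entity;

architecture sim of i2c_pad is
begin
  -- Open drain: the pad only ever pulls low; releasing the line ('Z')
  -- lets the external pull-up bring it high.
  sda <= '0' when drive_low = '1' else 'Z';
end architecture;
```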

Since the analogue behavioural model is only a simplification, there is also a need to do verification with the schematic netlist of the analogue parts to check that their behaviour is the same. These tests need to be kept short due to the increased time needed to run the analogue netlist simulations. Here, a few samples have been tested through the direct serialization mode, through the normal packet based mode, and through the daisy-chained mode.

The tests also need to be run with the digital gate level code with back annotated timing, together with the layout version of the analogue, to verify that the timing between the ADC and the digital is in order and that the power-up sequencing works as it should.

The mixed-signal verification helped uncover issues with signals between the ADC and the digital having inverse direction, start-up issues related to the clock gating employed in the direct serialization mode, and timing issues between the ADC and the digital inputs.