
Your target must be able to receive UDP packets reliably and with acceptable throughput. It must also be able to receive UDP packets with a size equal to the MTU.
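For reference, the sketch below shows the kind of minimal receive loop a target-side test application performs: bind a UDP socket, then drain datagrams as fast as they arrive. It is written against the standard BSD socket API with a hypothetical port number; µC/TCP-IP offers a comparable BSD-style socket layer, but the exact calls and setup depend on your port.

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>

    #define RX_PORT     50000u   /* hypothetical test port           */
    #define MAX_PAYLOAD 1472u    /* 1500 MTU - 20 IP - 8 UDP headers */

    int main(void)
    {
        unsigned char      buf[MAX_PAYLOAD];
        struct sockaddr_in addr;

        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) {
            return 1;
        }

        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(RX_PORT);
        if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            close(sock);
            return 1;
        }

        /* Drain datagrams as fast as possible; each recvfrom() returns
           one complete UDP payload of up to MAX_PAYLOAD bytes.         */
        for (;;) {
            ssize_t len = recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);
            if (len < 0) {
                break;           /* receive error: stop the loop        */
            }
            /* Count packets and bytes here to compute throughput.      */
        }

        close(sock);
        return 0;
    }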

Test 1: Maximum Bandwidth Receive UDP Test using NDIT

Select the UDP test tab in the NDIT main window. The UDP test tab appears.

The first test we suggest you run is a 100 Mbps test with a 1472-byte payload. It is the most demanding test in terms of data reception: UDP is a lightweight transport protocol, so the CPU is strained by a flood of UDP datagrams.

The figure below shows the UDP test tab.

There are four options for the UDP receive test:

Input Bandwidth

To test at a single bandwidth, set the Start and Stop values to the same bandwidth. Values are in megabits per second.

If the Start and Stop values differ, a UDP receive test is first launched at the Start bandwidth, and each subsequent test increases the bandwidth by the Increment value until the Stop bandwidth is reached. For example, Start = 10, Stop = 100 and Increment = 10 runs ten tests, at 10, 20, …, 100 Mbps.

Iterations

NDIT will repeat the UDP receive test, under the same conditions, the specified number of times.

Payload Size

1472 bytes is the maximum payload size, and maximizes throughput: the 1500-byte Ethernet MTU minus the 20-byte IP header and the 8-byte UDP header leaves 1472 bytes for the UDP payload.

Multi Size Sweep will repeat the test with payloads of 64, 128, 256, 512, 1024 and 1472 bytes.

Test Duration

This option can be found in the General Options tab.

Expected Results

    • Highest throughput possible

Although it is difficult to estimate the achievable throughput of a particular device, you can compare your results against drivers running on hardware with roughly the same number of network buffers and a similar processor speed, as in the table below.

Development Board       Device 1            Device 2
CPU Speed               72 MHz              70 MHz
CPU Architecture        ARM® Cortex-M3™     ARM® Cortex-M4™
Rx Buffers              4                   3
Rx Descriptors          4                   3

64-byte datagram:
Socket Calls            33144               58652
Throughput (Mbps)       1.695               3.002

1472-byte datagram:
Socket Calls            27915               31788.91
Throughput (Mbps)       32.866              37.433

There is also a practical limit at which the network driver can operate. At some point, as you increase the input data rate, the network driver becomes overwhelmed and starts dropping the excess packets it cannot handle.

As shown in the figure below, there is a point where the rate of increase in throughput slows down and the error rate climbs until the throughput reaches its limit. Depending on the driver’s architecture, increasing the input data rate beyond this point can even decrease the driver’s performance, because of the growing number of receive interrupts that must be handled (illustrated in the sketch below).
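The sketch below illustrates, in deliberately simplified form, why this happens. All names are hypothetical rather than the actual µC/TCP-IP driver API: each interrupt walks a small receive descriptor ring, and once frames arrive faster than the ring can be drained, the excess is dropped.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative only: names and layout are hypothetical, not the
       actual uC/TCP-IP driver API.                                   */
    #define RX_DESC_CNT 4u               /* cf. Device 1 in the table */

    typedef struct {
        volatile bool full;              /* MAC wrote a frame here    */
        uint8_t      *buf;               /* DMA buffer for this slot  */
        uint16_t      len;               /* received frame length     */
    } rx_desc_t;

    static rx_desc_t rx_ring[RX_DESC_CNT];
    static uint32_t  rx_drops;

    /* Each interrupt hands completed frames to the stack. Once the
       input rate outpaces this loop, the MAC finds no free descriptor
       and the excess frames are lost (reported here through a
       hypothetical overflow flag).                                   */
    void eth_rx_isr(bool overflow)
    {
        if (overflow) {
            rx_drops++;                  /* ring was full: frame lost */
        }
        for (uint32_t i = 0u; i < RX_DESC_CNT; i++) {
            if (rx_ring[i].full) {
                /* Pass rx_ring[i].buf / rx_ring[i].len to the stack,
                   then return the descriptor to the MAC for reuse.   */
                rx_ring[i].full = false;
            }
        }
    }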

    • Few transitory errors.

      See the section on transitory errors for more information.

    • Low packet loss.

      Packet loss should begin to occur only near or after the point where the driver reaches its maximum throughput (close to 32 Mbps in the example in the figure above). If there is constant packet loss across the whole input data rate range, then something is wrong.

    • Ability to receive packets with a size equal to the MTU.

See the section on sending packets for more information.

    • Similar results with target directly connected and on a network.

Unless there is heavy broadcast traffic on the real network, the results should be fairly similar.

    • No buffer leaks.

See section “Buffer leaks” for more details; a minimal leak-check sketch follows this list.

    • Log the performance results (with the target both directly connected and on a network).
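A simple way to watch for buffer leaks between test runs is to compare the driver’s idle free-buffer count against a pre-test baseline, as in the sketch below. Here drv_rx_free_buf_cnt() is a hypothetical hook you would wire to your stack’s real pool statistics; the body is a placeholder.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical hook: report how many receive buffers are currently
       free in the driver's pool. Replace the placeholder body with
       your stack's actual statistics query.                           */
    static uint32_t drv_rx_free_buf_cnt(void)
    {
        return 4u;   /* placeholder value for illustration only */
    }

    /* A leaked buffer never returns to the pool, so the idle
       free-buffer count after a test run should match the pre-test
       baseline.                                                       */
    int main(void)
    {
        uint32_t baseline = drv_rx_free_buf_cnt();   /* before the test */

        /* ... run a UDP receive test here ... */

        uint32_t after = drv_rx_free_buf_cnt();      /* link idle again */
        if (after < baseline) {
            printf("possible leak: %u buffer(s) not returned\n",
                   (unsigned)(baseline - after));
        }
        return 0;
    }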

Test 2: Payload Size Sweep Receive UDP Test using NDIT

This test is similar to the previous one, except that we vary the size of the payload received by the target, setting it to 64, 128, 256, 512 and 1024 bytes. Reducing the packet size increases the number of packets the target must process in the same amount of time. With a 64-byte payload, the smallest in the sweep, you can measure the maximum packet rate your driver can handle.
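To see how quickly the packet rate climbs as the payload shrinks, the short sketch below computes the packets per second needed to sustain a 10 Mbps data rate for each sweep size. It assumes the configured bandwidth counts UDP payload bytes only; NDIT’s exact accounting may differ, and link-level framing adds further per-packet overhead.

    #include <stddef.h>
    #include <stdio.h>

    /* Packets per second needed to sustain a given payload data rate,
       assuming the configured bandwidth counts UDP payload bytes only. */
    static double pkts_per_sec(double mbps, unsigned payload_bytes)
    {
        return (mbps * 1.0e6) / (payload_bytes * 8.0);
    }

    int main(void)
    {
        static const unsigned sizes[] = { 64u, 128u, 256u, 512u, 1024u, 1472u };

        for (size_t i = 0u; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
            printf("%4u-byte payload @ 10 Mbps -> %9.1f pkts/s\n",
                   sizes[i], pkts_per_sec(10.0, sizes[i]));
        }
        return 0;
    }

Under these assumptions, a 64-byte payload at 10 Mbps requires roughly 19,531 packets per second, 23 times the packet rate of a 1472-byte payload at the same data rate.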

Expected Results

  • Highest throughput possible

    Once again, predicting the achievable throughput may be difficult. As the payload length decreases, the packet rate must increase to sustain the required data rate, and throughput typically drops. This drop is likely because executing the µC/TCP-IP module operations for each packet takes more time than transferring the packet from the network device to processor memory.

  • Few transitory errors.

    See the section on transitory errors for more information.

  • Ability to receive packets with a size equal to the MTU.

    See the section on sending packets for more information.

  • Similar results with the target directly connected and on a network.
  • No buffer leaks.

    See section “Buffer leaks” for more details.

  • Log the performance results (with the target both directly connected and on a network).