...

This option can be found in the General Options tab.

Expected Results

  • Highest throughput possible

    Although it is difficult to estimate the achievable throughput for a particular device, it is possible to compare against other drivers running on hardware with roughly the same number of network buffers and a similar processor speed, as in the table below. A sketch of how the throughput figures are derived from the socket-call counts is given just after this list.

    Development Board                        Device 1             Device 2
    CPU Speed                                72 MHz               70 MHz
    CPU Architecture                         ARM® Cortex-M3™      ARM® Cortex-M4™
    Rx Buffers                               4                    3
    Rx Descr.                                4                    3
    64 byte Datagram   - Socket Call         33144                58652
    64 byte Datagram   - Throughput (Mbps)   1.695                3.002
    1472 byte Datagram - Socket Call         27915                31788.91
    1472 byte Datagram - Throughput (Mbps)   32.866               37.433

    There is also a practical limit at which the network driver can operate. At some point, as the input data rate increases, the network driver becomes overwhelmed and starts dropping the excess packets it cannot handle.

    As shown in the figure below, there is a point where the rate of increase in throughput slows down and the error rate rises until the throughput reaches its limit. Depending on the driver’s architecture, increasing the input data rate beyond that point may even decrease the driver’s performance, because of the larger number of receive interrupts that must be handled.

  • Few transitory errors.

    See the section on transitory errors for more information.

  • Low packet loss.

    Packet loss should begin to occur only near or after the point where the driver reaches its maximum throughput (close to 32 Mbps in the example in Figure 8-9). If packets are lost constantly throughout the input data rate range, then something is wrong.

  • Ability to receive packets with a size equal to the MTU.

    See the section on sending packets for more information.

  • Similar results with the target directly connected and on a network.

    Unless there is heavy broadcast traffic on the real network, the results should be fairly similar.

  • No buffer leaks.

    See IPerf Test Case for more details.

  • Logging performance results (with the target directly connected, and networked).
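
The throughput figures in the table above are simply the number of successful socket calls multiplied by the datagram size, converted to bits per second over the duration of the run. The sketch below reproduces that conversion in C; the 10-second test duration is an assumption that happens to match the published figures, so adjust it to your own measurement window.

    #include <stdio.h>

    /* Convert a socket-call count into throughput (Mbps), assuming each
     * call delivered one full datagram and the measurement window was
     * 10 seconds (an assumption consistent with the table above).      */
    static double Throughput_Mbps (unsigned long sock_calls,
                                   unsigned long payload_bytes,
                                   double        duration_s)
    {
        return ((double)sock_calls * (double)payload_bytes * 8.0)
               / (duration_s * 1.0e6);
    }

    int main (void)
    {
        printf("Device 1,   64-byte datagrams: %.3f Mbps\n",
               Throughput_Mbps(33144UL,   64UL, 10.0));    /* ~1.697  */
        printf("Device 1, 1472-byte datagrams: %.3f Mbps\n",
               Throughput_Mbps(27915UL, 1472UL, 10.0));    /* ~32.87  */
        return (0);
    }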

...

This test is similar to the previous one, except that we modify the size of the payload received by the target. We will set the payload size to 64, 128, 256, 512 and 1024 bytes. By reducing the size of the packets, we increase the number of packets processed by the target in the same amount of time. Using a payload size of 64 bytes (the smallest payload for an Ethernet frame) yields the maximum packet rate that your driver can handle.
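
If the host side is driven by IPerf, its -l option sets the UDP payload length for this sweep. As an alternative, the minimal host-side sender below (standard POSIX sockets, not part of µC/TCP-IP) cycles through the payload sizes; the target address, port and burst count are placeholders to adapt to your setup.

    /* Minimal host-side UDP sender for the payload-size sweep.  This is a
     * sketch using standard POSIX sockets, not part of the uC/TCP-IP test
     * suite; the target address, port and burst count are placeholders.  */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main (void)
    {
        const int          sizes[] = { 64, 128, 256, 512, 1024 };
        char               payload[1024];
        struct sockaddr_in dest;
        int                sock;

        memset(payload, 0xA5, sizeof(payload));

        sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) {
            return (1);
        }

        memset(&dest, 0, sizeof(dest));
        dest.sin_family = AF_INET;
        dest.sin_port   = htons(5001);                        /* IPerf's default port.  */
        inet_pton(AF_INET, "192.168.1.100", &dest.sin_addr);  /* Placeholder target IP. */

        for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
            /* Send a burst at each payload size; a real test would pace the
             * sends to sweep the input data rate as described above.        */
            for (int n = 0; n < 100000; n++) {
                sendto(sock, payload, (size_t)sizes[i], 0,
                       (struct sockaddr *)&dest, sizeof(dest));
            }
        }

        close(sock);
        return (0);
    }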

Expected Results

  • Highest throughput possible

    Once again, predicting the achievable throughput can be difficult. As the length of the payload decreases, the packet rate must increase to sustain the required data rate, and the measured throughput typically decreases. This decrease is most likely because executing the µC/TCP-IP module operations for each packet takes more time than transferring the packet from the network device to processor memory. A quick packet-rate calculation is given after this list.

  • Few transitory errors

    See the section on transitory errors for more information.

  • Ability to send packets with a size equal to the MTU.

    See the section on sending packets for more information.

  • Similar results with the target directly connected and on a network.
  • No buffer leaks

    See IPerf Test Case for more details.

  • Logging performance results (with the target directly connected, and networked).
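
To put the packet-rate reasoning above into numbers, the short calculation below shows how many packets per second are needed to sustain a given input data rate at each payload size in the sweep. The 10 Mbps figure is only an example, and the calculation counts payload bits only, ignoring UDP/IP/Ethernet header overhead.

    #include <stdio.h>

    /* Packets per second required to sustain a given UDP data rate for each
     * payload size in the sweep.  Illustrative only: it counts payload bits
     * and ignores UDP/IP/Ethernet header overhead.                          */
    int main (void)
    {
        const double data_rate_bps = 10.0e6;     /* Example input rate: 10 Mbps. */
        const int    sizes[]       = { 64, 128, 256, 512, 1024 };

        for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
            printf("%4d-byte payload: %8.0f packets/s\n",
                   sizes[i], data_rate_bps / (sizes[i] * 8.0));
        }
        return (0);
    }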