This section presents the key characteristics of the audio 1.0 specification that should be understood in order to use the Micriµm audio class. Note that the MIDI interface is mentioned in this section, but it is not supported by the current audio class implementation. MIDI is mentioned only to help describe the audio 1.0 device in its entirety.
Functional Characteristics
An audio device is composed of one or several Audio Interface Collections (AIC). Each AIC describes one unique audio function. Thus, if the audio device has several audio functions, several AICs can be active at the same time. An AIC is formed by:
- One mandatory AudioControl (AC) interface
- Zero or several optional AudioStreaming (AS) interfaces
- Zero or several optional MIDI interfaces
presents a typical composite audio device:
An AC interface is used to control and configure the audio function before and while playing a stream. For instance, the AC interface allows the host to mute the device, change the volume, control tones (bass, mid, treble), select a certain path within the device to play the stream, mix streams, etc. It uses several class-specific requests to control and configure the audio function. An AS interface transports audio data into and out of the function via isochronous endpoints. An audio stream can be configured by sending certain class-specific requests to the AS interface. The configurable settings are the sampling frequency and/or the pitch. A MIDI interface carries MIDI data via bulk endpoints.
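As a concrete illustration, a sampling frequency change is performed with a class-specific SET_CUR request sent to the isochronous data endpoint. The sketch below follows the request layout given in the audio 1.0 specification (section 5.2.3.2); the endpoint address and the 44.1 kHz frequency used later are arbitrary example values, and the function name is illustrative, not part of the Micriµm API.

```c
#include <stdint.h>

/* Class-specific request codes from the audio 1.0 specification. */
#define AUDIO_REQ_SET_CUR           0x01u
#define AUDIO_CS_SAMPLING_FREQ_CTRL 0x01u

/* Fill an 8-byte setup packet for SET_CUR(SAMPLING_FREQ_CONTROL) addressed to
 * the isochronous data endpoint 'ep_addr', and the 3-byte little-endian data
 * stage carrying the new sampling frequency. */
void AudioSetSamplingFreqReq(uint8_t setup_pkt[8], uint8_t payload[3],
                             uint8_t ep_addr, uint32_t freq_hz)
{
    setup_pkt[0] = 0x22u;                       /* bmRequestType: host-to-device, class, endpoint. */
    setup_pkt[1] = AUDIO_REQ_SET_CUR;           /* bRequest.                                       */
    setup_pkt[2] = 0x00u;                       /* wValue LSB.                                     */
    setup_pkt[3] = AUDIO_CS_SAMPLING_FREQ_CTRL; /* wValue MSB: control selector.                   */
    setup_pkt[4] = ep_addr;                     /* wIndex LSB: endpoint address.                   */
    setup_pkt[5] = 0x00u;                       /* wIndex MSB.                                     */
    setup_pkt[6] = 0x03u;                       /* wLength LSB: 3-byte data stage.                 */
    setup_pkt[7] = 0x00u;                       /* wLength MSB.                                    */

    /* Data stage: the sampling frequency as a 3-byte little-endian value. */
    payload[0] = (uint8_t)( freq_hz          & 0xFFu);
    payload[1] = (uint8_t)((freq_hz >>  8u) & 0xFFu);
    payload[2] = (uint8_t)((freq_hz >> 16u) & 0xFFu);
}
```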
An audio function has the following endpoint characteristics:
- One pair of control IN and OUT endpoints called the default endpoint.
- One optional interrupt IN endpoint.
- One or several isochronous IN and/or OUT endpoints (mandatory only if at least one AS interface is used).
- One or several bulk IN and/or OUT endpoints (mandatory only if at least one MIDI interface is used).
describes the usage of the different endpoints:
Besides the standard enumeration process, the control endpoints can be used to configure all terminals and units. Terminals and units are described in the next section, Audio Function Topology. The interrupt IN endpoint is used to retrieve general status information about any addressable entity. It is associated with two additional class-specific requests: the Memory and Status requests. In practice, the interrupt IN endpoint is rarely implemented in audio 1.0 devices because the Memory and Status requests are almost never used.
AS interfaces use isochronous endpoints to transfer audio data. Isochronous transfers were designed to deliver real-time data between host and device with a guaranteed latency. The host allocates a specific amount of bandwidth within a frame (i.e. 1 ms) for isochronous transfers, and these have priority over control and bulk transfers. Hence, isochronous transfers are well-suited to audio data streaming. An audio device moving data through USB operates in a system where different clocks are running: the audio sample clock, the USB bus clock and the service clock (refer to section 5.12.2 of the USB 2.0 specification for more details about these clocks). These three clocks must be synchronized at some point in order to deliver isochronous data reliably. Otherwise, clock synchronization issues (for instance, clock drift, jitter, or clock-to-clock phase differences) may introduce unwanted audio artifacts. These clock synchronization issues are one of the major challenges when streaming audio data between the host and the device. To address this challenge, the USB 2.0 specification proposes a strong synchronization scheme for delivering isochronous data. There are three types of synchronization:
- Asynchronous: endpoint produces and consumes data at a rate that is locked either to a clock external to the USB or to a free-running internal clock. The data rate can be either fixed, limited to a certain number of sampling frequencies or continuously programmable. Asynchronous endpoints cannot synchronize to Start-of-Frame (SOF) or any other clock in the USB domain.
- Synchronous: endpoint can have its clock system controlled externally through SOF synchronization. The hardware must provide a way to slave the sample clock of the audio part to the 1 ms SOF tick of the USB part to have a perfect synchronization. Synchronous endpoints may produce or consume isochronous data streams at either a fixed, a limited number or a continuously programmable data rate.
- Adaptive: the endpoint is the most capable because it can produce and consume data at any rate within its operating range.
Refer to section '5.12.4.1 Synchronization Type' of USB 2.0 specification for more details about synchronization types.
An AS interface must have at least two alternate settings:
- One default alternate setting declaring zero endpoints. This setting is used by the host to temporarily relinquish USB bandwidth when the stream on this AS interface is not active.
- One or several other alternate settings with at least one isochronous endpoint. These alternate settings are called operational interfaces, that is, interfaces selected while the stream is active. Every alternate setting represents the same AS interface, but with the associated isochronous endpoint having different characteristics (e.g. maximum packet size). When opening a stream, the host must select exactly one operational interface. The selection is based on the resources the host can allocate for this endpoint.
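The default and operational alternate settings above can be sketched as standard interface descriptors. In the sketch below, the interface number is an arbitrary example value; the class and subclass codes come from the audio 1.0 specification:

```c
#include <stdint.h>

/* Alternate setting 0 of AS interface 1: zero-bandwidth (no endpoint). */
const uint8_t AS_IF_Alt0[9] = {
    9u,     /* bLength.                             */
    0x04u,  /* bDescriptorType: INTERFACE.          */
    1u,     /* bInterfaceNumber (example value).    */
    0u,     /* bAlternateSetting: default setting.  */
    0u,     /* bNumEndpoints: no endpoint.          */
    0x01u,  /* bInterfaceClass: AUDIO.              */
    0x02u,  /* bInterfaceSubClass: AUDIOSTREAMING.  */
    0x00u,  /* bInterfaceProtocol: unused.          */
    0u      /* iInterface: no string.               */
};

/* Alternate setting 1: operational, one isochronous endpoint follows it. */
const uint8_t AS_IF_Alt1[9] = {
    9u, 0x04u, 1u,
    1u,     /* bAlternateSetting: operational.      */
    1u,     /* bNumEndpoints: one isochronous endpoint. */
    0x01u, 0x02u, 0x00u, 0u
};
```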
Audio Function Topology
An audio function is composed of units and terminals. Units and terminals form addressable entities that allow the physical properties of the audio function to be manipulated. shows an example of an audio function topology with some units and terminals:
A unit is the basic building block of an audio function. Connected together, units can fully describe most audio functions. A unit has one or more Input pins and one single Output pin. Each pin represents a cluster of logical audio channels inside the audio function. The unit model is generic enough to describe fully digital, fully analog, or even hybrid audio functions. Each unit has an associated descriptor with several fields that identify and characterize the unit. Audio 1.0 defines five units presented in .
A terminal is an entity that represents a starting or ending point for audio channels inside the audio function. There are two types of terminals, presented in table : Input and Output terminals. A terminal either provides data streams to the audio function (Input Terminal) or consumes data streams coming from the audio function (Output Terminal). Each terminal has an associated descriptor.
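As an illustration, an Input Terminal descriptor follows the layout given in the audio 1.0 specification (section 4.3.2.1). In the sketch below, the terminal ID, terminal type (0x0201 = Microphone) and channel layout are example values:

```c
#include <stdint.h>

/* Class-specific Input Terminal descriptor (audio 1.0, section 4.3.2.1). */
const uint8_t AC_InputTerminal[12] = {
    12u,            /* bLength.                               */
    0x24u,          /* bDescriptorType: CS_INTERFACE.         */
    0x02u,          /* bDescriptorSubtype: INPUT_TERMINAL.    */
    1u,             /* bTerminalID (example value).           */
    0x01u, 0x02u,   /* wTerminalType: 0x0201, Microphone.     */
    0u,             /* bAssocTerminal: no association.        */
    1u,             /* bNrChannels: mono.                     */
    0x00u, 0x00u,   /* wChannelConfig: no spatial location.   */
    0u,             /* iChannelNames: no string.              */
    0u              /* iTerminal: no string.                  */
};
```

The host reads this descriptor (along with the other unit and terminal descriptors of the AC interface) during enumeration and uses the terminal ID to address class-specific requests to this entity.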
The functionality represented by a unit or a terminal is managed through audio controls. A control gives access to a specific audio property (e.g. volume, mute). A control is managed by using class-specific requests sent over the default control endpoint. Class-specific requests for a unit or terminal's control are addressed to the AC interface. A control has a set of attributes that can be manipulated or that provide additional information about the behavior of the control. The possible attributes are:
- Current setting attribute (CUR)
- Minimum setting attribute (MIN)
- Maximum setting attribute (MAX)
- Resolution attribute (RES)
- Memory space attribute (MEM)
The class-specific requests are GET and SET requests whose general structure is shown in .
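As an illustration of this GET/SET structure, the sketch below builds the setup packet of a GET request (GET_CUR, GET_MIN, GET_MAX or GET_RES) for the Volume control of a Feature Unit, following the request layout of the audio 1.0 specification (section 5.2.1). The function name is illustrative, and the unit ID, interface number and channel passed to it are example values:

```c
#include <stdint.h>

/* Attribute request codes (GET direction) from the audio 1.0 specification. */
#define AUDIO_REQ_GET_CUR  0x81u
#define AUDIO_REQ_GET_MIN  0x82u
#define AUDIO_REQ_GET_MAX  0x83u
#define AUDIO_REQ_GET_RES  0x84u
#define AUDIO_CS_VOLUME    0x02u   /* Feature Unit Volume control selector. */

/* Build a GET request for a Feature Unit Volume control attribute.
 * 'attr' is one of the AUDIO_REQ_GET_xxx codes above. */
void AudioVolumeGetReq(uint8_t setup_pkt[8], uint8_t attr,
                       uint8_t unit_id, uint8_t if_nbr, uint8_t ch)
{
    setup_pkt[0] = 0xA1u;           /* bmRequestType: device-to-host, class, interface. */
    setup_pkt[1] = attr;            /* bRequest: attribute to read.                     */
    setup_pkt[2] = ch;              /* wValue LSB: logical channel number.              */
    setup_pkt[3] = AUDIO_CS_VOLUME; /* wValue MSB: control selector.                    */
    setup_pkt[4] = if_nbr;          /* wIndex LSB: AC interface number.                 */
    setup_pkt[5] = unit_id;         /* wIndex MSB: Feature Unit ID.                     */
    setup_pkt[6] = 2u;              /* wLength LSB: volume values are 2 bytes.          */
    setup_pkt[7] = 0u;              /* wLength MSB.                                     */
}
```

A host typically issues GET_MIN, GET_MAX and GET_RES once to learn the control's range, then GET_CUR/SET_CUR during operation.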
As shown in , there are also class-specific requests addressed to an AS interface or an isochronous endpoint that allow some other controls to be managed. These controls are presented in .
Unit and terminal descriptors allow the USB audio device to describe the audio function topology. By retrieving these descriptors, the host is able to rebuild the audio function topology because the interconnections between units and terminals are fully defined. Unit and terminal descriptors form the class-specific descriptors associated with the AC interface. There are also class-specific descriptors associated with an AS interface and its associated isochronous endpoint (refer to the audio 1.0 specification, section 4 'Descriptors' for more details about AC and AS class-specific descriptors and their content). These AS class-specific descriptors give details about the audio data format manipulated by the AS interface. The audio 1.0 specification defines three audio data format types, which encompass uncompressed and compressed audio formats:
Type I format
- PCM
- PCM8
- IEEE_Float
- ALaw and µLaw
Type II format
- MPEG
- AC-3
Type III format based on IEC1937 standard
Refer to Audio 1.0 Data Formats specification, section 2 'Audio Data Formats' for more details about these formats.
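For Type I formats, the AS class-specific format type descriptor carries the characteristics of the stream. The sketch below describes a hypothetical 16-bit stereo PCM stream with a single discrete sampling frequency of 48 kHz, following the Type I format type descriptor layout of the Audio 1.0 Data Formats specification:

```c
#include <stdint.h>

/* Type I format type descriptor: 16-bit stereo PCM at a single 48 kHz
 * discrete sampling frequency (example values). */
const uint8_t AS_FormatTypeI[11] = {
    11u,                /* bLength: 8 + 3 * (nbr of discrete frequencies).  */
    0x24u,              /* bDescriptorType: CS_INTERFACE.                   */
    0x02u,              /* bDescriptorSubtype: FORMAT_TYPE.                 */
    0x01u,              /* bFormatType: FORMAT_TYPE_I.                      */
    2u,                 /* bNrChannels: stereo.                             */
    2u,                 /* bSubframeSize: 2 bytes per audio subframe.       */
    16u,                /* bBitResolution: 16 bits used per subframe.       */
    1u,                 /* bSamFreqType: 1 discrete sampling frequency.     */
    0x80u, 0xBBu, 0x00u /* tSamFreq[0]: 48000 Hz (0x00BB80), little-endian. */
};
```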
Feedback Endpoint
The USB 2.0 specification states that if an isochronous OUT data endpoint uses asynchronous synchronization, an isochronous feedback endpoint is needed. The feedback endpoint allows the device to slow down or speed up the rate at which the host sends audio samples per frame, so that the USB and audio clocks stay in sync. A few interesting characteristics of the feedback endpoint are:
- Although initially known as the feedback endpoint, it is called the synch endpoint in the USB 2.0 specification.
- The feedback endpoint always operates in the opposite direction of the isochronous data endpoint.
- The feedback endpoint is characterized by a refresh period, the period at which the host asks for the feedback value (Ff).
- An extended standard endpoint descriptor is used to describe the association between a data endpoint and its feedback endpoint. The fields added by the extension are:
- bSynchAddress: The address of the endpoint used to communicate synchronization information if required by this endpoint. Reset to zero if no synchronization pipe is used.
- bRefresh: This field indicates the rate at which an isochronous synchronization pipe provides new synchronization feedback data. This rate must be a power of 2, therefore only the power is reported back and the range of this field is from 1 (2 ms) to 9 (512 ms).
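The pairing between a data endpoint and its feedback endpoint can be sketched with the 9-byte extended endpoint descriptors used by audio 1.0 devices. In the sketch below, the endpoint addresses, packet sizes, attributes and refresh period are example values for a full-speed asynchronous OUT stream:

```c
#include <stdint.h>

/* Asynchronous isochronous OUT data endpoint (address 0x01), paired with
 * its feedback (synch) IN endpoint (address 0x81). */
const uint8_t EP_IsocDataOut[9] = {
    9u,           /* bLength: extended endpoint descriptor.     */
    0x05u,        /* bDescriptorType: ENDPOINT.                 */
    0x01u,        /* bEndpointAddress: OUT, endpoint 1.         */
    0x05u,        /* bmAttributes: isochronous, asynchronous.   */
    0xC0u, 0x00u, /* wMaxPacketSize: 192 bytes (example).       */
    1u,           /* bInterval: every frame.                    */
    0u,           /* bRefresh: unused on a data endpoint.       */
    0x81u         /* bSynchAddress: the feedback endpoint below. */
};

const uint8_t EP_SynchIn[9] = {
    9u, 0x05u,
    0x81u,        /* bEndpointAddress: IN, endpoint 1.          */
    0x01u,        /* bmAttributes: isochronous.                 */
    0x03u, 0x00u, /* wMaxPacketSize: 3 bytes (full-speed Ff).   */
    1u,           /* bInterval.                                 */
    5u,           /* bRefresh: feedback every 2^5 = 32 ms.      */
    0u            /* bSynchAddress: none.                       */
};
```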
Ff is expressed in number of samples per (micro)frame for one channel. The Ff value consists of:
- an integer part that represents the (integer) number of samples per (micro)frame and,
- a fractional part that represents the “fraction” of a sample that would be needed to match the sampling frequency Fs to a resolution of 1 Hz or better.
There are two different Ff encodings depending on the device speed.
Full speed (FS): unsigned encoding, 3 bytes needed. The lower 4 bits optionally extend the precision of Ff, otherwise they are 0.

Bits 31-24 | Bits 23-14 | Bits 13-4 | Bits 3-0
---|---|---|---
0 | Nbr of samples per frame for one channel | Fraction of a sample | 0 (optional precision extension)

High speed (HS): unsigned encoding, 4 bytes needed. The lower 3 bits optionally extend the precision of Ff, otherwise they are 0.

Bits 31-28 | Bits 27-16 | Bits 15-3 | Bits 2-0
---|---|---|---
0 | Nbr of samples per µframe for one channel | Fraction of a sample | 0 (optional precision extension)
Full-speed encoding is called format 10.10 (without fraction extension) or 10.14 (with fraction extension).
High-speed encoding is called format 12.13 (without fraction extension) or 16.16 (with fraction extension).
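The two encodings above can be sketched as fixed-point computations. The helper names below are illustrative, not part of the Micriµm API; fractions beyond the encoded precision are simply truncated:

```c
#include <stdint.h>

/* Encode Ff for a full-speed device: samples per 1 ms frame in 10.14
 * fixed-point format (fraction extension used). */
uint32_t Ff_Encode_FS(uint32_t fs_hz)
{
    return (uint32_t)(((uint64_t)fs_hz << 14u) / 1000u);
}

/* Encode Ff for a high-speed device: samples per 125 us microframe
 * (8 microframes per ms) in 16.16 fixed-point format. */
uint32_t Ff_Encode_HS(uint32_t fs_hz)
{
    return (uint32_t)(((uint64_t)fs_hz << 16u) / 8000u);
}
```

For instance, at 44.1 kHz a full-speed device nominally delivers 44.1 samples per frame, which encodes to 44.1 × 2^14 = 722534 (truncated).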
Refer to USB 2.0 specification, section 5.12.4.2 'Feedback' for more details about the feedback endpoint.