How accurate is my RSSI reading on Si4x6x rev B1B?
The question in the title can be interpreted in two ways:
1. How accurate is my RSSI reading on any one chip?
2. How much variation can I expect from chip to chip?
We will cover both of these topics in this article. Before we dive into the measurement results, however, here is a brief overview of how RSSI works on the chip.
There are two different RSSI values that can be read back from the chip: the latched RSSI and the current RSSI. The current RSSI reflects the measurement taken over the latest 1*Tb time period and is updated every 1*Tb. As its name suggests, the latched RSSI is a single RSSI value that is captured from the current RSSI at a specific point in the receive process. The receiver retains this latched value until it is restarted, either by the automatic state machine on the chip or by the host MCU. The latching event can be configured in the “LATCH” field of API property “MODEM_RSSI_CONTROL”. Find below a recap of the possible latching events.
Name Value Description
DISABLED 0 Latch is disabled. The returned value of the Latched RSSI will always read 0.
PREAMBLE 1 Latches RSSI at Preamble detect.
SYNC 2 Latches RSSI at Sync Word detect.
RX_STATE1 3 Latches RSSI at 4*Tb after RX is enabled (7*Tb if AVERAGE = 0).
RX_STATE2 4 (only with AVERAGE=0) Latches RSSI at 8*Tb after RX is enabled.
RX_STATE3 5 (only with AVERAGE=0) Latches RSSI at 12*Tb after RX is enabled.
RX_STATE4 6 (only with AVERAGE=0) Latches RSSI at 16*Tb after RX is enabled.
RX_STATE5 7 (only with AVERAGE=0) Latches RSSI at 20*Tb after RX is enabled.
Note that the “PREAMBLE” and “SYNC” options only make sense in packet mode operation; don’t use them in direct mode applications! A few of the entries in the table refer to a value called “AVERAGE”. The current RSSI value (which becomes the latched RSSI value at the latching event) is updated at 1*Tb bit period intervals, but each update can either be the result of a single measurement over the previous 1*Tb period or the average over the previous 4*Tb bit periods. The “AVERAGE” field selects between the two modes of operation as follows:
Name Value Description
AVERAGE4 0 The RSSI value is updated at 1*Tb bit period intervals but always reflects the average value over the previous 4*Tb bit periods.
BIT1 2 The RSSI value is updated every 1*Tb bit period.
Averaging over 4 bit periods makes the reading more accurate at the price of delaying the first valid reading by 3*Tb, so the setting of this field is a trade-off between accuracy and first valid RSSI access time. This averaging setting applies to both the current and the latched RSSI values.
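As an illustration, below is a minimal sketch of configuring these two fields over SPI. The SPI helper (spi_xfer) is a hypothetical platform-specific function, and the property index and bit positions are quoted from memory, so always verify them against the MODEM_RSSI_CONTROL entry in the API document.

```c
#include <stdint.h>

/* Hypothetical platform-specific SPI helper: asserts nSEL, clocks out tx_len
 * command bytes, polls CTS, then reads rx_len response bytes. Replace with
 * your own radio HAL routine. */
extern void spi_xfer(const uint8_t *tx, uint8_t tx_len, uint8_t *rx, uint8_t rx_len);

#define CMD_SET_PROPERTY        0x11
#define PROP_GRP_MODEM          0x20
#define PROP_MODEM_RSSI_CONTROL 0x4C        /* verify index against the API document */

/* Field encodings per the tables above (verify bit positions in the API doc) */
#define RSSI_LATCH_SYNC         0x02        /* LATCH[2:0]   = 2 -> latch at sync word detect */
#define RSSI_AVERAGE4           (0x00 << 3) /* AVERAGE[4:3] = 0 -> average over 4*Tb        */

static void rssi_control_configure(void)
{
    const uint8_t cmd[] = {
        CMD_SET_PROPERTY,
        PROP_GRP_MODEM,
        1,                               /* number of properties to write */
        PROP_MODEM_RSSI_CONTROL,
        RSSI_AVERAGE4 | RSSI_LATCH_SYNC  /* averaged RSSI, latched at sync detect */
    };
    spi_xfer(cmd, sizeof(cmd), 0, 0);
}
```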
Both the latched and the current RSSI values can be read back with the GET_MODEM_STATUS command (refer to the API document for details). In addition, the latched RSSI value can also be read back from one of the FRRs (Fast Response Registers) to shorten the access time; refer to the FRR_CTL section of the API document for details. To avoid confusion, it is important to note that the current RSSI value cannot be read back from the FRRs.
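Below is a minimal read-back sketch reusing the spi_xfer helper and CMD_SET_PROPERTY definition from the previous sketch. The response byte positions and the FRR mode value are quoted from memory, so double-check them against the GET_MODEM_STATUS and FRR_CTL sections of the API document before use.

```c
#define CMD_GET_MODEM_STATUS  0x22
#define CMD_FRR_A_READ        0x50
#define PROP_GRP_FRR_CTL      0x02
#define PROP_FRR_CTL_A_MODE   0x00
#define FRR_MODE_LATCHED_RSSI 0x0A  /* "latched RSSI" mode; verify in the FRR_CTL section */

/* Read both RSSI values with GET_MODEM_STATUS.
 * Per the API document, CURR_RSSI and LATCH_RSSI sit at response bytes 2 and 3. */
static void rssi_read_modem_status(uint8_t *curr_rssi, uint8_t *latch_rssi)
{
    const uint8_t cmd[] = { CMD_GET_MODEM_STATUS, 0xFF }; /* 0xFF: leave pending flags untouched */
    uint8_t resp[8];
    spi_xfer(cmd, sizeof(cmd), resp, sizeof(resp));
    *curr_rssi  = resp[2];
    *latch_rssi = resp[3];
}

/* Faster path: route the latched RSSI to FRR A once, then read it back with a
 * single-byte command whenever needed. The current RSSI is NOT available here. */
static uint8_t rssi_read_latched_frr(void)
{
    const uint8_t cfg[] = { CMD_SET_PROPERTY, PROP_GRP_FRR_CTL, 1,
                            PROP_FRR_CTL_A_MODE, FRR_MODE_LATCHED_RSSI };
    const uint8_t cmd[] = { CMD_FRR_A_READ };
    uint8_t frr;
    spi_xfer(cfg, sizeof(cfg), 0, 0);     /* one-time FRR A configuration */
    spi_xfer(cmd, sizeof(cmd), &frr, 1);  /* fast read of the latched RSSI */
    return frr;
}
```

Now, let’s get back to the original questions.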
1. How accurate is my RSSI reading on any one chip?
This question translates to quantifying the spread one may see when reading back the current RSSI value repeatedly at the same input power level. If we assume a normal distribution of the RSSI samples, the standard deviation of the results gives us insight into the repeatability of the reading at any one power level. The graph below shows both the average and the standard deviation of the current RSSI value read back 500 times at each power level, parameterized with data rate (DR) settings of 1, 40, 100 and 500 kbps with Rx bandwidths of 2 kHz, 93 kHz, 206 kHz and 825 kHz, respectively. No averaging was configured on the chip itself.
Both ends of the curves are clipped to some fixed value. At the higher end of the dynamic range this value is constant over DR settings. At the lower end of the dynamic range, however, this value changes with DR: the lower the DR, the narrower the filter bandwidth, which means the receiver sees less noise and hence the clipping value (also referred to as the noise floor) is lower. It is also interesting to note that at the lower end of the dynamic range the standard deviation of the reading is higher. Fundamentally, the standard deviation is higher on noise, and as long as the wanted signal remains close to the noise floor the reading also remains noisy.
Now what to make of all of this data? Let’s take one example: how many times do I need to read back the current RSSI value at sensitivity level to be able to say that my averaged result always stays within +/- 1 dB?
Typically the sensitivity level is 10 dB above the noise floor (this is true for 2GFSK cases with a modulation index of 1). Let’s zoom in on one of the traces (40 kbps) from the above plot for this example.
Our noise floor is around 26 RSSI codes. As 1 RSSI code means a 0.5 dB jump in power level, our expected sensitivity level is around 26 + 2*10 = 46 RSSI codes (from the graph this means around -111 dBm). Our standard deviation at this point is around 4.5 RSSI codes. If we assume a normal distribution, the probabilities of a read-back staying within a given number of standard deviations of the average are the following:
Distance from average [standard deviations]   Probability of the read-back staying within that distance
1 0.682689492137
2 0.954499736104
3 0.997300203937
4 0.999936657516
5 0.999999426697
6 0.999999998027
Let’s pick a 1 ppm failure rate, which corresponds to a distance of 5 standard deviations. (This means that only 1 read-back in a million will fall outside the +/- 5 standard deviation window.) So, returning to our example, the read-backs at sensitivity level will stay within +/- 22.5 codes of the average value at a failure rate of 1 ppm. This translates to +/- 11.25 dB. This does not meet our original +/- 1 dB specification; our standard deviation is too high for that. We would need a standard deviation of 0.4 RSSI codes to stay within the +/- 2 RSSI code target. With averaging, the standard deviation scales with the square root of the averaging number (st_dev_avg_N = st_dev / sqrt(N)). We need to scale down the standard deviation by a factor of 4.5/0.4 = 11.25, which means averaging over 11.25^2 = 126.5625 readings, which rounded up yields 127. So if a +/- 1 dB accurate RSSI reading is needed at sensitivity level, the current RSSI must be read back 127 times (at least 1*Tb apart) and an average must be calculated.
This has been an extreme example, but following this logic you can calculate your RSSI reading accuracy to a certain confidence level around sensitivity using the data from the first graph, as shown in the sketch below.
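Here is a minimal sketch of that calculation; the numbers are the ones used above and the function name is only illustrative. Read the standard deviation off the first graph for your own data rate and input power.

```c
#include <math.h>
#include <stdio.h>

/* How many current-RSSI readings must be averaged so that the averaged value
 * stays within +/- target_db at n_sigma standard deviations of confidence. */
static unsigned required_readings(double sigma_codes, /* std dev of one reading [RSSI codes] */
                                  double target_db,   /* allowed error [+/- dB]              */
                                  double n_sigma)     /* confidence, e.g. 5 for ~1 ppm       */
{
    double target_codes = target_db * 2.0;            /* 1 RSSI code = 0.5 dB                */
    double sigma_needed = target_codes / n_sigma;     /* std dev required after averaging    */
    return (unsigned)ceil(pow(sigma_codes / sigma_needed, 2.0)); /* st_dev_avg_N = st_dev / sqrt(N) */
}

int main(void)
{
    /* 4.5 codes at sensitivity (40 kbps trace), +/- 1 dB target, 5 sigma (~1 ppm) */
    printf("%u readings needed\n", required_readings(4.5, 1.0, 5.0)); /* prints 127 */
    return 0;
}
```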
As a more realistic example, if averaging over 4*Tb is used on the chip, the standard deviation numbers on the graphs get halved, meaning around 2.25 RSSI codes at sensitivity level.
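Plugging 2.25 codes into the sketch above yields ceil((2.25/0.4)^2) = 32 readings for the same +/- 1 dB target at 5 standard deviations.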
There is one more important aspect worth mentioning in this section. As you can see from the graphs, the standard deviation does not converge to 0 at ever higher input power levels, which is what one would expect as the SNR (Signal to Noise Ratio) increases. In this region (from sensitivity + 20 dB up to the upper clipping end of the dynamic range) the deviation we are seeing is not caused by noise but by the way the RSSI measurement is implemented, which results in a 1 dB ripple on the read-back value. What is important here is that the read-back distribution is no longer normal, so the calculations above do not hold in this region; the RSSI read-back simply toggles between (at most) 3 codes.
2. How much variation can I expect from chip to chip?
In this section we examine how much the average reading changes from chip to chip at a given power level. We assume that the RSSI curves look the same on each chip and only focus on how much these curves can shift. For this, find below a CCDF (Complementary Cumulative Distribution Function) showing the distribution of the average RSSI reading at a comfortably strong -60 dBm input power level on 20 chips taken from various lots, parameterized also with temperature.
Temperature [°C]                  -40         25          85
Average [RSSI code]               152.651     150.445     146.6925
Standard Deviation [RSSI code]    0.983281    0.999483    1.168422
Assuming normal distributions again, one can calculate the RSSI shift from part to part to a certain confidence level following the logic from the previous section. As an example, at room temperature and at the 5 standard deviation (1 ppm) confidence level, the parts will read back an average RSSI value that stays within 5 * 1 code * 0.5 dB = +/- 2.5 dB of the average of the whole population.
This variation, however, can be eliminated with a one-point calibration at production test. API property MODEM_RSSI_COMP contains an offset value that directly affects the RSSI reading.
A larger compensation value adjusts the returned RSSI value upwards, and a smaller value adjusts it downwards. Refer to the API document for more details. Note that the resolution of the RSSI reading restricts the accuracy of the calibration to +/- 0.5 dB.
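A minimal production-test sketch of such a calibration is shown below, reusing the SPI helper and read-back routine from the earlier sketches. It assumes that one LSB of MODEM_RSSI_COMP shifts the reading by one RSSI code, that the property index is 0x4E and that its reset value is 0x40; all three assumptions must be verified against the API document.

```c
#define PROP_MODEM_RSSI_COMP  0x4E  /* assumed index; verify in the API document */
#define RSSI_COMP_DEFAULT     0x40  /* assumed reset value of the property       */

/* One-point calibration at production test: apply a known input signal, average
 * many current-RSSI read-backs, then shift MODEM_RSSI_COMP so the average lands
 * on the RSSI code expected for that input power. */
static void rssi_one_point_calibration(uint8_t expected_code, unsigned num_reads)
{
    uint8_t curr, latch;
    uint32_t sum = 0;

    for (unsigned i = 0; i < num_reads; i++) {
        rssi_read_modem_status(&curr, &latch);  /* from the earlier sketch        */
        sum += curr;                            /* readings must be >= 1*Tb apart */
    }

    int error_codes = (int)(sum / num_reads) - (int)expected_code;
    /* Assumption: 1 LSB of MODEM_RSSI_COMP == 1 RSSI code (0.5 dB).
     * Reading too high -> lower the compensation value, and vice versa. */
    uint8_t comp = (uint8_t)(RSSI_COMP_DEFAULT - error_codes);

    const uint8_t cmd[] = { CMD_SET_PROPERTY, PROP_GRP_MODEM, 1,
                            PROP_MODEM_RSSI_COMP, comp };
    spi_xfer(cmd, sizeof(cmd), 0, 0);
}
```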
As you can see from the graph, there is a consistent shift in the curves with temperature: roughly a 2 RSSI code increase from 25 to -40 °C and a 4 RSSI code decrease from 25 to 85 °C. This shift may, however, be compensated in the host software by utilizing the on-chip temperature sensor.
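For example, here is a minimal host-side sketch of such a compensation, assuming the shift is linear between the three measured points above; the die temperature itself can be obtained from the on-chip sensor via the GET_ADC_READING command.

```c
#include <stdint.h>

/* Refer the RSSI reading back to its 25 °C value, using the measured shifts
 * quoted above: roughly +2 codes at -40 °C and -4 codes at +85 °C relative to
 * room temperature, interpolated linearly in between. */
static double rssi_temp_corrected(uint8_t raw_rssi, double temp_c)
{
    double shift_codes;
    if (temp_c < 25.0)
        shift_codes =  2.0 * (25.0 - temp_c) / 65.0;   /* 25 .. -40 °C: reading rises by ~2 codes */
    else
        shift_codes = -4.0 * (temp_c - 25.0) / 60.0;   /* 25 .. +85 °C: reading drops by ~4 codes */
    return (double)raw_rssi - shift_codes;             /* remove the temperature-induced shift */
}
```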