How to calculate throughput after passing through an AWGN channel?

I'm trying to calculate the throughput after passing an OFDM signal through an AWGN channel. I want to build my BER vs Throughput plot; I have already figured out the BER part, but I'm having a hard time calculating the throughput because I can't figure out how to get the time elapsed for the whole transmission process. Thank you!

Answers (1)

Walter Roberson
Walter Roberson on 24 Mar 2022
Edited: Walter Roberson on 18 Apr 2022
Most packet-based message transfer systems work by transferring a packet and then observing a mandatory silence period. This design originated with CSMA (Carrier Sense Multiple Access), where multiple transmitters share a common transmission medium; to figure out whose turn it is to talk, each system that wants to talk has to listen for the minimum period of silence in order to know that the medium is available for another sender to try to take control. The duration of that gap was tied to the maximum length of time needed for a signal to propagate from one side of the network to the other in order to claim temporary ownership of the transmission channel.
These days some technologies use different transmit and receive channels and so can sometimes talk even though a different system is already talking -- but the arrangement of a gap of silence tends to remain in protocols that do not use a central controller.
In order to figure out the throughput, you need to figure out the time it takes to transmit a packet of maximum data length, and add to that the mandatory gap time: this gives you the total amount of time that must be devoted to sending that message. Now figure out how much actual data was transmitted in the packet, excluding all packet headers and removing any layers of error-correcting code: that tells you how much payload data was sent. The amount of payload data divided by the total time the channel had to be devoted to sending the message gives you the throughput.
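As a minimal sketch of that arithmetic (every number below is made up for illustration -- substitute your own packet format, bit rate, and gap):

% Minimal sketch (hypothetical numbers): throughput of a simple, unreliable link.
payloadBits  = 8000;        % actual user data per packet, excluding headers/ECC (assumed)
overheadBits = 500;         % headers plus error-correction parity (assumed)
bitRate      = 1e6;         % channel bit rate in bits/second (assumed)
gapTime      = 50e-6;       % mandatory inter-packet silence in seconds (assumed)

packetTime = (payloadBits + overheadBits) / bitRate;   % time the packet occupies the channel
totalTime  = packetTime + gapTime;                     % total channel time per packet
throughput = payloadBits / totalTime                   % payload bits per second

Note that the "time elapsed" here comes from the packet length and the bit rate, not from wall-clock timing of your simulation code.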
In some protocols, the mandatory gap is specified in absolute seconds. For those protocols, as the transmission rate rises, or as more data gets sent (multiplexed somehow), the fixed number of seconds tends to come to dominate the throughput. As newer, faster protocols are put into place, it is difficult to leave the absolute-seconds gap behind, because there is always the possibility that an older-protocol device is waiting for its chance to talk.
For newer protocols, the mandatory gap is specified in bit-times (the amount of time required to send one bit... or one baud), so as the transmission rate rises, the gap stays the same proportion of the packet time. But these protocols can find it a challenge to detect that older devices are present and want to talk.
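Here is a rough sketch contrasting the two gap conventions; all of the rates, sizes, and gap values are assumed purely for illustration:

% Sketch: how the two gap conventions scale with bit rate (all numbers assumed).
payloadBits = 8000;
bitRates    = [1e6 10e6 100e6 1e9];   % increasingly fast links

gapFixed    = 50e-6;                  % gap fixed in absolute seconds (assumed)
gapBitTimes = 96;                     % gap fixed in bit-times (e.g. an interframe gap)

effFixed  = payloadBits ./ (payloadBits./bitRates + gapFixed)            ./ bitRates;
effScaled = payloadBits ./ (payloadBits./bitRates + gapBitTimes./bitRates) ./ bitRates;

table(bitRates(:), effFixed(:), effScaled(:), ...
    'VariableNames', {'BitRate','Efficiency_fixedGap','Efficiency_bitTimeGap'})

With the fixed-seconds gap the efficiency collapses as the bit rate rises; with the bit-time gap it stays constant.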
In real systems, rather than just theory, sometimes the sending system does not have enough resources to put out packets at full speed, or the transmission hardware might not operate at full speed, so for real systems you typically measure throughput rather than just calculating it from theory.
In addition, some protocols, such as TCP/IP, do not just send packets off and hope that the receiver got them correctly: the protocols offer ways for the receiver to detect corrupted packets, detect packets that have not arrived, or detect that perhaps the channel has broken, and the receiver then sends a message asking the transmitter to re-send. (But how does the receiver know that the transmitter received the please-resend??) These kinds of "reliable transmission protocols" cannot just assume that everything was received okay if they are not told otherwise -- suppose the receiver was turned off, for example. So there are carefully considered error detection and control mechanisms -- and because there is a maximum amount of data that can be transmitted before the sender needs reassurance that the data was received, at higher speeds or with larger files the effective throughput can turn out to be limited by the error detection / control mechanisms.
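As a rough sketch of that last point (every value below is assumed, and all other protocol details are ignored): the amount of data the sender may have outstanding before it needs reassurance, divided by the round-trip time, puts a ceiling on throughput no matter how fast the raw channel is.

% Sketch (assumed numbers): the sender may have at most 'windowBits' unacknowledged.
windowBits = 64*1024*8;     % hypothetical limit on outstanding data, in bits
rtt        = 100e-3;        % round-trip time in seconds (assumed)
bitRate    = 1e9;           % raw channel rate (assumed)

% The link cannot exceed the raw rate, but it also cannot exceed windowBits/rtt.
maxThroughput = min(bitRate, windowBits / rtt)   % bits per second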
To a first approximation, and for unreliable transmission protocols, "throughput" is "amount of payload data transmitted divided by the time required to transmit it." Notice that this says nothing about how long the signal takes "in flight". The next packet can potentially go out immediately after the (gap for the) previous one: you can have multiple packets "in flight". "Throughput" as a concept does not care whether the packets are going to the next room or going from Saturn to Earth, just about how much data is transmitted in how many seconds. Propagation delays are not relevant for this.
However, for "reliable" transmissions, where you might have to stop transmitting until the receiver acknowledges that it received data successfully, the time it takes to get the data to the receiver and the time it takes the acknowledgement to come back do depend upon the signal speed (and the distance it has to travel). This means that the measured throughput for a reliable transmission could end up considerably lower than the theoretical value. You can model this, but you need a model that supports (virtual) delays.
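A minimal stop-and-wait sketch of that effect, assuming the sender waits for an acknowledgement of each packet before sending the next one (all values are invented for illustration):

% Sketch: stop-and-wait model; every value below is assumed.
payloadBits  = 8000;
overheadBits = 500;
bitRate      = 1e6;
propDelay    = 20e-3;       % one-way signal travel time in seconds (assumed)
ackBits      = 100;         % size of the acknowledgement packet (assumed)

packetTime = (payloadBits + overheadBits) / bitRate;
ackTime    = ackBits / bitRate;
cycleTime  = packetTime + propDelay + ackTime + propDelay;  % send, wait, ack returns

throughputReliable = payloadBits / cycleTime   % drops sharply as propDelay grows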
When you are using "reliable" transmissions and your transmission medium is subject to corruption (the AWGN you mentioned), then error detection and correction start becoming important: ECC might be able to correct some errors without requiring re-transmission. But statistically speaking, AWGN might happen to produce a burst of noise strong enough (or unfortunately timed) to corrupt a packet beyond repair. At that point the details of the channel error control protocols become important.
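Since you already have the BER, one hedged way to connect it to packet loss is the usual independent-bit-error assumption; the numbers below are only examples:

% Sketch: probability that a packet of Nbits is hit by at least one bit error,
% assuming bit errors are independent (a reasonable assumption for plain AWGN).
ber   = 1e-4;               % bit error rate you measured after the AWGN channel (example value)
Nbits = 8500;               % total bits per packet, including headers (assumed)
per   = 1 - (1 - ber)^Nbits % fraction of packets corrupted (and needing retransmission if uncorrectable)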
It is completely valid to want to measure or model throughput under various levels or varieties of noise, taking into account the error detection and correction mechanisms -- and even to pit different ECC methods or different reliability protocols against each other and figure out which is best suited for which conditions. If a particular ECC scheme reduces the amount of raw data per packet by 1%, but saves a retransmission on (say) 1 packet in 1000 under simulated noise conditions, is that worth doing? Maybe it is, if a retransmission costs 3 hours round-trip.
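A numeric sketch of that trade-off, with every probability, time, and overhead value assumed rather than taken from any real protocol:

% Numeric sketch of the trade-off above (every number here is assumed).
payloadBits    = 8000;
packetTime     = 8.5e-3;    % time to send one packet (assumed)
retransPenalty = 0;         % extra cost of a resend cycle; try 3*3600 for a 3-hour round trip

% Option A: no extra ECC; 1 packet in 1000 must be resent (ignoring double failures).
perA        = 1/1000;
timeA       = packetTime + perA * (retransPenalty + packetTime);
throughputA = payloadBits / timeA;

% Option B: 1% more coding overhead, assumed here to eliminate retransmissions.
timeB       = packetTime * 1.01;
throughputB = payloadBits / timeB;

[throughputA, throughputB]   % with retransPenalty = 3*3600, option B wins easily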
To model including error correction, you need to know:
- the delay between when the transmitter makes the signal available and the time the signal starts transmitting;
- the time for the signal to reach the receiver;
- the delay at the receiving end, after it has received the end of the signal, before it figures out that some or all of the signal needs to be retransmitted;
- the time to create the retransmit request, and the delay before it can start to be sent out;
- the time for the request to travel back, and the time to finish receiving it;
- and the delay before the other end can start acting on it...
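If it helps, here is a sketch that simply adds those components up for one failed-then-retransmitted packet; every delay value is hypothetical and would need to come from your own measurements or design numbers:

% Sketch: sum the delay components listed above for one corrupted packet cycle.
% Every value below is hypothetical.
txQueueDelay  = 1e-3;       % signal ready at transmitter -> transmission actually starts
packetTime    = 8.5e-3;     % time to clock the packet out
propDelay     = 20e-3;      % time for the signal to reach the receiver
rxDetectDelay = 2e-3;       % receiver decides the packet is corrupt
nakBuildDelay = 0.5e-3;     % receiver creates the retransmit request
nakQueueDelay = 1e-3;       % request waits before it can be sent
nakTime       = 0.1e-3;     % time to transmit the request
nakPropDelay  = 20e-3;      % request travels back to the transmitter
txReactDelay  = 1e-3;       % transmitter processes the request

retransmitCycle = txQueueDelay + packetTime + propDelay + rxDetectDelay + ...
                  nakBuildDelay + nakQueueDelay + nakTime + nakPropDelay + txReactDelay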
  1 Comment
Toshi
Toshi on 2 May 2024
@Walter Roberson could you tell me how you would record the time of the transmitted signal and the received signal in order to calculate the delay?

