iperf3 is useful for testing network throughput
https://netbeez.net/blog/linux-for-network-engineers-iperf3-bidirectional-test/
https://www.datapacket.com/blog/10gbps-network-bandwidth-test-iperf-tutorial
https://iperf.fr/iperf-doc.php
https://stackoverflow.com/questions/47035263/iperf-tcp-much-faster-than-udp-why

On RHEL/Rocky/CentOS 8+ install it via:

sudo dnf install iperf3
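
The --bidir option used further down was added in iperf3 3.7, so it's worth checking which version the package gives you:

iperf3 --version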

Temporarily add firewall rules to allow incoming traffic for the iperf3 server (do this on any computer that will run iperf3 -s, i.e. in server mode):

sudo firewall-cmd --add-port 5201/tcp
sudo firewall-cmd --add-port 5201/udp
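
These rules only change the runtime configuration (no --permanent), so they disappear on reboot or firewall reload. To remove them by hand once you're done testing:

sudo firewall-cmd --remove-port 5201/tcp
sudo firewall-cmd --remove-port 5201/udp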

Get the IP address of the computer you're going to run the server on and start the server:

ip addr
iperf3 -s
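
The server keeps running and serves one test after another until you stop it. If you prefer, -1 (--one-off) makes it exit after a single test, and -D runs it as a background daemon:

iperf3 -s -1
iperf3 -s -D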

On the other computer, run iperf3 in client mode to run some tests; it defaults to TCP.
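
The simplest test connects to the server and pushes TCP in one direction for the default 10 seconds:

iperf3 -c 172.18.18.179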

A quick test I use for 10Gbit connections is:

iperf3 --bidir -P 4 -c 172.18.18.179 -t 30

--bidir means run bidirectionally. I do this to stress the module/connection/etc. being tested, so here a 10Gbit connection would push 10Gbit in both directions for 20Gbit total (if your iperf3 predates --bidir, see the -R alternative after this list).
-P 4 means use 4 parallel streams; this sometimes helps saturate a connection where a single stream doesn't.
-c ip.add.r.ess means connect to this IP, where the iperf3 server is running.
-t 30 means run the test for 30 seconds.
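
If --bidir isn't available, one workaround (assuming testing each direction separately is acceptable) is -R (--reverse), which makes the server send and the client receive; run one normal test and one reversed test to cover both directions, just not simultaneously:

iperf3 -P 4 -c 172.18.18.179 -t 30
iperf3 -R -P 4 -c 172.18.18.179 -t 30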

A summary of results for this kind of command (here from a 10-second run) might look like this, with the [SUM] lines showing the combined throughput of the parallel streams in each direction (TX-C is client-to-server, RX-C is server-to-client):

- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID][Role] Interval           Transfer     Bitrate         Retr
[  5][TX-C]   0.00-10.00  sec  5.45 GBytes  4.68 Gbits/sec    0             sender
[  5][TX-C]   0.00-10.04  sec  5.44 GBytes  4.66 Gbits/sec                  receiver
[  7][TX-C]   0.00-10.00  sec  5.45 GBytes  4.68 Gbits/sec    0             sender
[  7][TX-C]   0.00-10.04  sec  5.44 GBytes  4.66 Gbits/sec                  receiver
[SUM][TX-C]   0.00-10.00  sec  10.9 GBytes  9.36 Gbits/sec    0             sender
[SUM][TX-C]   0.00-10.04  sec  10.9 GBytes  9.32 Gbits/sec                  receiver
[  9][RX-C]   0.00-10.00  sec  5.44 GBytes  4.68 Gbits/sec    0             sender
[  9][RX-C]   0.00-10.04  sec  5.44 GBytes  4.65 Gbits/sec                  receiver
[ 11][RX-C]   0.00-10.00  sec  5.44 GBytes  4.67 Gbits/sec    0             sender
[ 11][RX-C]   0.00-10.04  sec  5.44 GBytes  4.65 Gbits/sec                  receiver
[SUM][RX-C]   0.00-10.00  sec  10.9 GBytes  9.35 Gbits/sec    0             sender
[SUM][RX-C]   0.00-10.04  sec  10.9 GBytes  9.31 Gbits/sec                  receiver

iperf Done.
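
If you want to keep results around for later parsing, -J (--json) prints the whole run as JSON instead of the table above (the output filename here is arbitrary):

iperf3 --bidir -P 4 -c 172.18.18.179 -t 30 -J > results.json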

Notes:

Testing using UDP:

iperf3 -P 4 -c 172.18.18.179 -u -b 2.5G -l 65000

Here I omitted the --bidir option since it seems to produce high packet loss (maybe it's my setup…).
-u means use UDP.
-b 2.5G means use 2.5Gbit of bandwidth per stream; I'm using 4 streams for a total of 10Gbit.
-l is the buffer length, which for UDP is the size of each datagram. It defaults to a low value with UDP, and I could only get about 3Gbit of throughput, whereas setting it to 65000 got me 8.20Gbit (see the fragmentation note after the output below).

Output of the above command:

- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  2.40 GBytes  2.06 Gbits/sec  0.000 ms  0/39565 (0%)  sender
[  5]   0.00-10.04  sec  2.40 GBytes  2.05 Gbits/sec  0.014 ms  0/39565 (0%)  receiver
[  7]   0.00-10.00  sec  2.40 GBytes  2.06 Gbits/sec  0.000 ms  0/39565 (0%)  sender
[  7]   0.00-10.04  sec  2.40 GBytes  2.05 Gbits/sec  0.009 ms  0/39565 (0%)  receiver
[  9]   0.00-10.00  sec  2.40 GBytes  2.06 Gbits/sec  0.000 ms  0/39565 (0%)  sender
[  9]   0.00-10.04  sec  2.40 GBytes  2.05 Gbits/sec  0.009 ms  0/39565 (0%)  receiver
[ 11]   0.00-10.00  sec  2.40 GBytes  2.06 Gbits/sec  0.000 ms  0/39565 (0%)  sender
[ 11]   0.00-10.04  sec  2.40 GBytes  2.05 Gbits/sec  0.018 ms  0/39565 (0%)  receiver
[SUM]   0.00-10.00  sec  9.58 GBytes  8.23 Gbits/sec  0.000 ms  0/158260 (0%)  sender
[SUM]   0.00-10.04  sec  9.58 GBytes  8.20 Gbits/sec  0.013 ms  0/158260 (0%)  receiver

iperf Done.
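
A caveat about -l 65000: a 65000-byte UDP datagram doesn't fit in a standard 1500-byte MTU, so the kernel fragments it at the IP layer, and the speedup mostly comes from iperf3 doing far fewer writes per second. That's fine on a clean LAN, but on lossy paths it can inflate the loss numbers, since losing a single fragment drops the whole datagram. A fragmentation-free variant (my suggestion, not from the links above) keeps each datagram under the MTU:

iperf3 -P 4 -c 172.18.18.179 -u -b 2.5G -l 1400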

Here UDP is useful for checking jitter and packet loss. That said, UDP is typically used for lower-bandwidth applications (from what I've read), so I'm not sure how helpful saturating a 10Gbit connection with UDP is for diagnostic purposes.
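
For something closer to real-world UDP traffic, a low-rate test still reports the jitter and loss columns and is probably more representative of, say, a voice or video stream (the 1M rate is just a value I picked for illustration):

iperf3 -u -b 1M -c 172.18.18.179 -t 30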