Socketcan and LinCAN benchmarks

Socketcan and LinCAN benchmarks

Michal Sojka
Dear socketcan users and developers,

while working on a paper for the Eleventh Real-Time Linux Workshop, we
have compared the timing properties of two Linux CAN drivers (Socketcan
and Lincan). For those who are interested, the results can be seen at
[http://rtime.felk.cvut.cz/can/benchmark/1/]. See also the comments on
some test cases below.

The main goal of this work was to find out whether and how socketcan's
integration with the Linux networking layer negatively influences
communication latencies. The result, in brief, is that there is some
impact, but socketcan is still suitable for most of our CAN-related
projects.

The testbed was a Pentium 4 box with a Kvaser quad-head PCI card
(SJA1000 based). We simply connected output 1 to output 2 and measured
how long it takes for a message to travel there and back. We used the
canping [1] tool to generate the CAN traffic. Canping uses our VCA
(virtual CAN API) library, which provides a common API for different
driver back-ends. If somebody feels this tool might be useful for
socketcan, we will simplify it to have no external dependencies so that
it can be included in socketcan/tools.
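
What follows is a rough sketch of the kind of round-trip measurement
canping performs, written directly against the SocketCAN raw-socket API
(this is not code from canping or the benchmark repository; the
interface names, the CAN ID and the omitted error handling are
illustrative only):

  #include <linux/can.h>
  #include <linux/can/raw.h>
  #include <net/if.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/socket.h>
  #include <time.h>
  #include <unistd.h>

  /* Open a CAN_RAW socket bound to the given interface. */
  static int open_can(const char *ifname)
  {
      struct sockaddr_can addr = { .can_family = AF_CAN };
      struct ifreq ifr;
      int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

      strncpy(ifr.ifr_name, ifname, IFNAMSIZ);
      ioctl(s, SIOCGIFINDEX, &ifr);
      addr.can_ifindex = ifr.ifr_ifindex;
      bind(s, (struct sockaddr *)&addr, sizeof(addr));
      return s;
  }

  int main(void)
  {
      int tx = open_can("can0");     /* output 1, wired to ... */
      int rx = open_can("can1");     /* ... output 2           */
      struct can_frame f = { .can_id = 0x123, .can_dlc = 8 };
      struct timespec t0, t1;

      clock_gettime(CLOCK_MONOTONIC, &t0);
      write(tx, &f, sizeof(f));      /* frame goes out on can0 ... */
      read(rx, &f, sizeof(f));       /* ... and arrives on can1    */
      clock_gettime(CLOCK_MONOTONIC, &t1);

      printf("one-way latency: %.1f us\n",
             (t1.tv_sec - t0.tv_sec) * 1e6 +
             (t1.tv_nsec - t0.tv_nsec) / 1e3);
      return 0;
  }

The real canping runs a master and a slave process, so the measured
value is the full round trip: the slave echoes each received frame back
to the master.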

If somebody would like to reproduce our results, all the code,
configurations etc. are available from a Git repository - see
[http://rtime.felk.cvut.cz/gitweb/can-benchmark.git].

Table of Contents
=================
1 Additional comments
    1.1 Socketcan versions
    1.2 Graph description
    1.3 Testbed details
    1.4 Main result
    1.5 2.6.26 virtual driver problem
    1.6 Kernel 2.6.31-rc7 has very good performance
    1.7 Related discussions
2 Footnotes


1 Additional comments
======================

1.1 Socketcan versions
-----------------------

All tests used socketcan from trunk at SVN revision 1009. The
exception is the kernel 2.6.31-rc7, for which the version in mainline
was used.

1.2 Graph description
----------------------

To graph the results, we use so-called latency profiles. A latency
profile is a cumulative histogram with a reversed vertical axis,
displayed on a logarithmic scale. This way, the exact number of packets
with the worst latencies can be read off at the bottom right of the
graph.
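
As an illustration of how such a profile can be computed (this is not
code from the benchmark repository): sort the measured latencies and,
for each value, emit the number of samples greater than or equal to it;
plotting these pairs with a logarithmic, reversed y axis gives the
latency profile:

  #include <stdio.h>
  #include <stdlib.h>

  static int cmp(const void *a, const void *b)
  {
      double x = *(const double *)a, y = *(const double *)b;
      return (x > y) - (x < y);
  }

  /* Print "latency count" pairs; count = number of samples >= latency. */
  static void latency_profile(double *lat, size_t n)
  {
      qsort(lat, n, sizeof(*lat), cmp);
      for (size_t i = 0; i < n; i++)
          printf("%g %zu\n", lat[i], n - i);
  }

  int main(void)
  {
      double lat[] = { 120.0, 95.5, 310.2, 101.3, 98.7 }; /* example data */
      latency_profile(lat, sizeof(lat) / sizeof(lat[0]));
      return 0;
  }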

1.3 Testbed details
--------------------

- Our CPU supports hyper-threading (HT); for kernels with suffix
  maxcpus=1 (kernel command line switch), HT was not used.
- irqbalance was not running
- CAN baudrate was 1 Mbit/s

1.4 Main result
----------------

[http://rtime.felk.cvut.cz/can/benchmark/1/by-clck/2400/ethflood64k/2.6.29.4-rt16%3Amaxcpus%3D1//ethflood64k.png]

This graph is the main result of our testing. If a system is loaded
simultaneously by Ethernet and CAN traffic, socketcan's worst-case
latency is increased and cannot be lowered by any tuning (see the red
lines).

For Lincan, if one sets a high priority on the IRQ thread servicing CAN
interrupts, the worst-case latency is the same as in the unloaded case
(see the 00-rtt test).
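
For reference, raising the priority of a threaded IRQ handler can be
done with chrt, or equivalently with a few lines of C; the PID of the
IRQ thread has to be looked up first (e.g. in ps output), and the
priority value 90 is just an example:

  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Usage: ./irqprio <pid-of-IRQ-thread>
   * Equivalent to: chrt -f -p 90 <pid-of-IRQ-thread> */
  int main(int argc, char *argv[])
  {
      struct sched_param sp = { .sched_priority = 90 };

      if (argc != 2) {
          fprintf(stderr, "usage: %s <pid>\n", argv[0]);
          return 1;
      }
      if (sched_setscheduler(atoi(argv[1]), SCHED_FIFO, &sp)) {
          perror("sched_setscheduler");
          return 1;
      }
      return 0;
  }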

1.5 2.6.26 virtual driver problem
----------------------------------

According to
[http://rtime.felk.cvut.cz/can/benchmark/1/by-clck/2400/rtt-virtual/2.6.26-2-686//rtt-virtual.png],
under Debian's 2.6.26 kernel, socketcan's virtual driver introduces
very high delays (0.5 second). We do not have an explanation for this.

1.6 Kernel 2.6.31-rc7 has very good performance
------------------------------------------------

The graphs at
[http://rtime.felk.cvut.cz/can/benchmark/1/kern-2.6.31-rc7.html] look
almost the same as the graphs for rt_preempt (-rt) kernels, and
socketcan often performs as fast as Lincan. There seem to be no
relevant changes in either the networking code or socketcan itself, so
we guess this might be related to some changes in RCU and socket
buffers.

1.7 Related discussions
------------------------

[https://lists.berlios.de/pipermail/socketcan-core/2006-August/000299.html]
[http://sourceforge.net/forum/forum.php?thread_id=1457719&forum_id=170893]


2 Footnotes
============

[1] [http://rtime.felk.cvut.cz/gitweb/canping.git]

Michal Sojka

Re: Socketcan and LinCAN benchmarks

Oliver Hartkopp
Michal Sojka wrote:


> while working on a paper for the Eleventh Real-Time Linux Workshop, we
> have compared the timing properties of two Linux CAN drivers (Socketcan
> and Lincan). For those who are interested, the results can be seen at
> [http://rtime.felk.cvut.cz/can/benchmark/1/]. See also the comments on
> some test cases below.


Hi Michal and Pavel,

thanks for this benchmark!


> The main goal of this work was to find out whether and how socketcan's
> integration with the Linux networking layer negatively influences
> communication latencies.


I did some benchmarks with a 2.6.30 LTTng kernel some weeks ago, but I
did not add any (remarkable) Ethernet traffic to also check that
influence. Good idea.


> The result, in brief, is that there is some
> impact, but socketcan is still suitable for most of our CAN-related
> projects.


Very good. This was also my conclusion.


> The testbed was a Pentium 4 box with a Kvaser quad-head PCI card
> (SJA1000 based). We simply connected output 1 to output 2 and measured
> how long it takes for a message to travel there and back.


The idea behind my setup was to benchmark chardev vs. netdev in
general. For that reason, I used a PEAK PCMCIA card (which had an
exclusive IRQ line) and compiled the same driver with and without
netdev support:

make KERNEL_LOCATION=/home/hartko/linux-2.6-lttng NET=NETDEV_SUPPORT
make KERNEL_LOCATION=/home/hartko/linux-2.6-lttng NET=NO_NETDEV_SUPPORT

As I only wanted to measure the receive path timings with a moderate
CAN traffic source, I activated these markers in LTTng:

chardev:
C1 /mnt/debugfs/ltt/markers/kernel/irq_entry/enable
C2 /mnt/debugfs/ltt/markers/fs/ioctl/enable

netdev:
N1 /mnt/debugfs/ltt/markers/kernel/irq_entry/enable
N2 /mnt/debugfs/ltt/markers/net/dev_receive/enable
N3 /mnt/debugfs/ltt/markers/fs/read/enable

In userspace I used 'candump' for SocketCAN and 'receivetest' from PEAK
for the chardev version - as both ultimately use copy_to_user(), which
is where the marker is located.

The results:

Measure     Average    Min      Max     Std. deviation
C1 -> C2    112 us     67 us    188 us      22.4 us

N1 -> N3     93 us     61 us    160 us       9.8 us
N2 -> N3     26 us     11 us     90 us       4.8 us
N1 -> N2     67 us     54 us     92 us       7.2 us

As the PEAK driver used exactly the same IRQ handler code, I was quite
astonished that the netdev concept was indeed faster than the chardev!

Regarding the much higher standard deviation of the chardev, my
assumption is that the FIFO in the chardev driver causes this problem,
as the FIFO is filled in IRQ context. When a CAN frame is pulled from
the FIFO, the lock needs to be IRQ-safe. On the netdev side, IRQ
context is only used to access the controller registers and to fill the
skbuff, which is handled in soft-IRQ context from there.
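
In kernel-style pseudo-C, the two receive patterns being contrasted
look roughly like this (a sketch only; read_frame_from_regs(), rx_fifo
and fifo_put() are hypothetical helpers, not code from the PEAK or
LinCAN sources):

  static DEFINE_SPINLOCK(fifo_lock);
  static DECLARE_WAIT_QUEUE_HEAD(rx_wait);

  /* chardev style: the ISR fills a driver FIFO; every later FIFO
   * access, e.g. from read(), must take the same lock with IRQs
   * disabled, which serializes readers against the interrupt path. */
  static irqreturn_t chardev_isr(int irq, void *dev)
  {
      struct can_frame f;

      read_frame_from_regs(dev, &f);
      spin_lock(&fifo_lock);          /* already in IRQ context */
      fifo_put(&rx_fifo, &f);
      spin_unlock(&fifo_lock);
      wake_up_interruptible(&rx_wait);
      return IRQ_HANDLED;
  }

  /* netdev style: the ISR only builds an skb and queues it; delivery
   * to the listening sockets then happens in softirq (NET_RX) context. */
  static irqreturn_t netdev_isr(int irq, void *dev)
  {
      struct sk_buff *skb = dev_alloc_skb(sizeof(struct can_frame));

      read_frame_from_regs(dev, skb_put(skb, sizeof(struct can_frame)));
      netif_rx(skb);                  /* hand off to softirq */
      return IRQ_HANDLED;
  }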

In the end, the RX path was faster with netdev, and the impact of
different driver implementations or of the drivers' TX handling on the
SJA1000 was excluded from this benchmark.


> We used the canping [1] tool to generate the CAN traffic.
> Canping uses our VCA (virtual CAN API) library, which provides a common
> API for different driver back-ends. If somebody feels this tool might
> be useful for socketcan, we will simplify it to have no external
> dependencies so that it can be included in socketcan/tools.


I tried to figure out how canping was used with SocketCAN in your
testbed, but I didn't find any socket() call :-)


> 1.5 2.6.26 virtual driver problem
> ----------------------------------
>
> According to
> [http://rtime.felk.cvut.cz/can/benchmark/1/by-clck/2400/rtt-virtual/2.6.26-2-686//rtt-virtual.png],
> under Debian's 2.6.26 kernel, socketcan's virtual driver introduces
> very high delays (0.5 second). We do not have an explanation for this.


I don't have any explanation for this either - if it indeed occurs on
2.6.26 only.

The vcan driver loops the CAN frame back in can_send() in af_can.c.

So when you write a CAN frame into a raw socket (e.g. on vcan0), this
skb is directly(!) delivered via several callback functions to the
socket receive buffers of the listening sockets.
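
This direct delivery is easy to observe with two raw sockets on the
same vcan interface. A fragment (reusing the hypothetical open_can()
helper from the round-trip sketch earlier in the thread, and assuming a
vcan0 interface is already configured):

  int a = open_can("vcan0");
  int b = open_can("vcan0");
  struct can_frame f = { .can_id = 0x001, .can_dlc = 0 };

  write(a, &f, sizeof(f));  /* can_send() loops the skb back ...   */
  read(b, &f, sizeof(f));   /* ... so it is already queued here,   */
                            /* without ever touching real hardware */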

When you kick in a bulk of CAN frames, I assume the system reads as
much data as possible (and delivers the skbs) until the stream is cut
off, e.g. by the sender of the data being scheduled out.

And *then* the other applications begin to read the stuff, which is
probably quite old by then (as you can see).

IMO this behaviour depends heavily on the scheduling algorithm of the
system, which may explain these measured values, especially on 2.6.26
... ?!?


> 1.6 Kernel 2.6.31-rc7 has very good performance
> ------------------------------------------------
>
> The graphs at
> [http://rtime.felk.cvut.cz/can/benchmark/1/kern-2.6.31-rc7.html] look
> almost the same as the graphs for rt_preempt (-rt) kernels, and
> socketcan often performs as fast as Lincan. There seem to be no
> relevant changes in either the networking code or socketcan itself, so
> we guess this might be related to some changes in RCU and socket buffers.


Yep!

From my chardev vs. netdev benchmark, I really got the feeling that the
networking code is always under the scrutiny of so many people around
the world who need the network stack to be as fast and performant as
possible.

So with all this per-CPU locking and other fine-grained locking, the
network people will always help the CAN code run faster too :-)

Again many thanks for your benchmarks.

If you have some slides for the Eleventh Real-Time Linux Workshop, I
would highly appreciate it if you could send a notification.

Best regards,
Oliver


Re: Socketcan and LinCAN benchmarks

Michal Sojka
On Wednesday 02 of September 2009 18:37:26 Oliver Hartkopp wrote:

> Michal Sojka wrote:
> > The testbed was a Pentium 4 box with a Kvaser quad-head PCI card
> > (SJA1000 based). We simply connected output 1 to output 2 and
> > measured how long it takes for a message to travel there and back.
>
> The idea behind my setup was to benchmark chardev vs. netdev in general.
> For that reason, I used a PEAK PCMCIA card (which had an exclusive IRQ
> line) and compiled the same driver with and without netdev support:
>
> [...]
>
> In the end, the RX path was faster with netdev, and the impact of
> different driver implementations or of the drivers' TX handling on the
> SJA1000 was excluded from this benchmark.

Interesting. We may try to do a similar test to check whether the
problem is in PEAK's FIFOs or whether it is a general property (Lincan
is also based on FIFOs filled from IRQ context).

> > We used the canping [1] tool to generate the CAN traffic.
>
> I tried to figure out how canping was used with SocketCAN in your
> testbed, but I didn't find any socket() call :-)

In http://rtime.felk.cvut.cz/gitweb/canping.git/blob/HEAD:/src/vca_canping.c,
you can see vca_open_handle() and other vca_*() functions, which are
implemented in libVCA. For socketcan, it is in
http://ocera.cvs.sourceforge.net/viewvc/ocera/ocera/components/comm/can/canvca/libvca/vca_isocketcan.c?revision=1.4&view=markup

Regards
Michal

Re: Socketcan and LinCAN benchmarks

Oliver Hartkopp
Michal Sojka wrote:

> On Wednesday 02 of September 2009 18:37:26 Oliver Hartkopp wrote:
>> Michal Sojka wrote:
>>> The testbed was a Pentium 4 box with a Kvaser quad-head PCI card
>>> (SJA1000 based). We simply connected output 1 to output 2 and
>>> measured how long it takes for a message to travel there and back.
>> The idea behind my setup was to benchmark chardev vs. netdev in general.
>> For that reason, I used a PEAK PCMCIA card (which had an exclusive IRQ
>> line) and compiled the same driver with and without netdev support:
>>
>> [...]
>>
>> In the end, the RX path was faster with netdev, and the impact of
>> different driver implementations or of the drivers' TX handling on the
>> SJA1000 was excluded from this benchmark.
>
> Interesting. We may try to do a similar test to check whether the problem
> is in PEAK's FIFOs or whether it is a general property (Lincan is also
> based on FIFOs filled from IRQ context).


If you're planning some more tests, I would be interested in what
happens if you run the rtt-virtual tests with waits of 0/1/2 ms.

When there is a delay between the sent frames, we would probably see
completely different behaviour on the virtual CANs.


>>> We used the canping [1] tool to generate the CAN traffic.
>> I tried to figure out how canping was used with SocketCAN in your
>> testbed, but I didn't find any socket() call :-)
>
> In http://rtime.felk.cvut.cz/gitweb/canping.git/blob/HEAD:/src/vca_canping.c,
> you can see vca_open_handle() and other vca_*() functions, which are
> implemented in libVCA. For socketcan, it is in
> http://ocera.cvs.sourceforge.net/viewvc/ocera/ocera/components/comm/can/canvca/libvca/vca_isocketcan.c?revision=1.4&view=markup

Ah, ok.

Regards,
Oliver


Re: Socketcan and LinCAN benchmarks

Wolfgang Grandegger
In reply to this post by Michal Sojka
Hi Michal,

Michal Sojka wrote:
> Dear socketcan users and developers,
>
> while working on a paper for the Eleventh Real-Time Linux Workshop, we
> have compared the timing properties of two Linux CAN drivers (Socketcan
> and Lincan). For those who are interested, the results can be seen at
> [http://rtime.felk.cvut.cz/can/benchmark/1/]. See also the comments on
> some test cases below.

Your results are indeed interesting. Could you please describe the
kernel config, especially the preemption type, of the kernels used in a
bit more detail? And also the load scenarios.

> The main goal of this work was to find out whether and how socketcan's
> integration with the Linux networking layer negatively influences
> communication latencies. The result, in brief, is that there is some
> impact, but socketcan is still suitable for most of our CAN-related
> projects.
>
> The testbed was a Pentium 4 box with a Kvaser quad-head PCI card
> (SJA1000 based). We simply connected output 1 to output 2 and measured
> how long it takes for a message to travel there and back. We used the
> canping [1] tool to generate the CAN traffic.

Could you be more precise about how the test was performed and which
interfaces are involved (can0 and can1?)? I do not understand what you
mean by "connected output 1 to output 2". Do you send and receive the
messages on the same system?

> Canping uses our VCA (virtual CAN API) library, which provides a common
> API for different driver back-ends. If somebody feels this tool might
> be useful for socketcan, we will simplify it to have no external
> dependencies so that it can be included in socketcan/tools.
>
> If somebody would like to reproduce our results, all the code,
> configurations etc. are available from a Git repository - see
> [http://rtime.felk.cvut.cz/gitweb/can-benchmark.git].
>
> Table of Contents
> =================
> 1 Additional comments
>     1.1 Socketcan versions
>     1.2 Graph description
>     1.3 Testbed details
>     1.4 Main result
>     1.5 2.6.26 virtual driver problem
>     1.6 Kernel 2.6.31-rc7 has very good performance
>     1.7 Related discussions
> 2 Footnotes
>
>
> 1 Additional comments
> ======================
>
> 1.1 Socketcan versions
> -----------------------
>
> All tests used socketcan from trunk at SVN revision 1009. The
> exception is the kernel 2.6.31-rc7, for which the version in mainline
> was used.
>
> 1.2 Graph description
> ----------------------
>
> To graph the results, we use so-called latency profiles. A latency
> profile is a cumulative histogram with a reversed vertical axis,
> displayed on a logarithmic scale. This way, the exact number of packets
> with the worst latencies can be read off at the bottom right of the
> graph.
>
> 1.3 Testbed details
> --------------------
>
> - Our CPU supports hyper-threading (HT); for kernels with suffix
>   maxcpus=1 (kernel command line switch), HT was not used.
> - irqbalance was not running
> - CAN baudrate was 1 Mbit/s
>
> 1.4 Main result
> ----------------
>
> [http://rtime.felk.cvut.cz/can/benchmark/1/by-clck/2400/ethflood64k/2.6.29.4-rt16%3Amaxcpus%3D1//ethflood64k.png]
>
> This graph is the main result of our testing. If a system is loaded
> simultaneously by Ethernet and CAN traffic, socketcan's worst-case
> latency is increased and cannot be lowered by any tuning (see the red
> lines).

Did you create other concurrent load like running hackbench or cache
calibrator or "while ls; do ls /bin; done" in a telnet window?

> For Lincan, if one sets a high priority on the IRQ thread servicing CAN
> interrupts, the worst-case latency is the same as in the unloaded case
> (see the 00-rtt test).
>
> 1.5 2.6.26 virtual driver problem
> ----------------------------------
>
> According to
> [http://rtime.felk.cvut.cz/can/benchmark/1/by-clck/2400/rtt-virtual/2.6.26-2-686//rtt-virtual.png],
> under Debian's 2.6.26 kernel, socketcan's virtual driver introduces
> very high delays (0.5 second). We do not have an explanation for this.
>
> 1.6 Kernel 2.6.31-rc7 has very good performance
> ------------------------------------------------
>
> The graphs at
> [http://rtime.felk.cvut.cz/can/benchmark/1/kern-2.6.31-rc7.html] look
> almost the same as the graphs for rt_preempt (-rt) kernels, and
> socketcan often performs as fast as Lincan. There seem to be no
> relevant changes in either the networking code or socketcan itself, so
> we guess this might be related to some changes in RCU and socket buffers.

It might also be related to your test method.

Wolfgang.

Re: Socketcan and LinCAN benchmarks

Michal Sojka
On Wednesday 02 of September 2009 22:14:36 Wolfgang Grandegger wrote:

> Hi Michal,
>
> Michal Sojka wrote:
> > Dear socketcan users and developers,
> >
> > while working on a paper for the Eleventh Real-Time Linux Workshop, we
> > have compared the timing properties of two Linux CAN drivers (Socketcan
> > and Lincan). For those who are interested, the results can be seen at
> > [http://rtime.felk.cvut.cz/can/benchmark/1/]. See also the comments on
> > some test cases below.
>
> Your results are indeed interesting. Could you please describe the
> kernel config, especially the preemption type, of the kernels used in a
> bit more detail? And also the load scenarios.

Hi Wolfgang,

I've just updated the results to include the kernel configs used. As
regards preemption, it was like this:
config-2.6.26-2-686:CONFIG_PREEMPT_NOTIFIERS=y
config-2.6.26-2-686:CONFIG_PREEMPT_NONE=y
config-2.6.29.4:CONFIG_PREEMPT_RCU=y
config-2.6.29.4:CONFIG_PREEMPT=y
config-2.6.29.4:CONFIG_DEBUG_PREEMPT=y
config-2.6.29.4-rt16:CONFIG_PREEMPT_RCU=y
config-2.6.29.4-rt16:CONFIG_PREEMPT_RT=y
config-2.6.29.4-rt16:CONFIG_PREEMPT=y
config-2.6.29.4-rt16:CONFIG_PREEMPT_SOFTIRQS=y
config-2.6.29.4-rt16:CONFIG_PREEMPT_HARDIRQS=y
config-2.6.29.4-rt16:CONFIG_DEBUG_PREEMPT=y
config-2.6.31-rc7:CONFIG_PREEMPT_RCU=y
config-2.6.31-rc7:CONFIG_PREEMPT=y
config-2.6.31-rc7:CONFIG_DEBUG_PREEMPT=y

The test scenarios can be seen from the testcase sources (available
from the pages as well). In general, the Ethernet load was generated
from another computer on the same network. Different testcases used
different loads:
ethflood: ping -f <computer with CAN card>
ethflood64k: ping -f -s 64 <computer with CAN card>
ethload: the computer with the CAN card ran: ssh <another host> 'find -L \
/usr/src/linux -type f -exec cat "{}" ";"'

>
> > The main goal of this work was to find out whether and how socketcan's
> > integration with the Linux networking layer negatively influences
> > communication latencies. The result, in brief, is that there is some
> > impact, but socketcan is still suitable for most of our CAN-related
> > projects.
> >
> > The testbed was a Pentium 4 box with a Kvaser quad-head PCI card
> > (SJA1000 based). We simply connected output 1 to output 2 and measured
> > how long it takes for a message to travel there and back. We used the
> > canping [1] tool to generate the CAN traffic.
>
> Could you be more precise about how the test was performed and which
> interfaces are involved (can0 and can1?)? I do not understand what you
> mean by "connected output 1 to output 2". Do you send and receive the
> messages on the same system?

Yes, messages are sent and received on the same system. Therefore the
measured time comprises 2*(send overhead + message transmission +
receive overhead) + canping reply generation overhead.
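
(For a rough sense of scale, assuming 8-byte payloads: at 1 Mbit/s, a
standard frame with 8 data bytes occupies about 111-135 bit times
depending on stuffing, i.e. roughly 110-135 us on the bus, so message
transmission alone accounts for some 220-270 us of each round trip.)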

>
> >
> > 1.4 Main result
> > ----------------
> >
> > [http://rtime.felk.cvut.cz/can/benchmark/1/by-clck/2400/ethflood64k/2.6.29.4-rt16%3Amaxcpus%3D1//ethflood64k.png]
> >
> > This graph is the main result of our testing. If a system is loaded
> > simultaneously by Ethernet and CAN traffic, socketcan's worst-case
> > latency is increased and cannot be lowered by any tuning (see the red
> > lines).
>
> Did you create other concurrent load like running hackbench or cache
> calibrator or "while ls; do ls /bin; done" in a telnet window?

We didn't specifically load the CPU. Only the canpings and, in some
tests, the ssh client were running on the system.

> > 1.6 Kernel 2.6.31-rc7 has very good performance
> > ------------------------------------------------
> >
> > The graphs at
> > [http://rtime.felk.cvut.cz/can/benchmark/1/kern-2.6.31-rc7.html] look
> > almost the same as the graphs for rt_preempt (-rt) kernels, and
> > socketcan often performs as fast as Lincan. There seem to be no
> > relevant changes in either the networking code or socketcan itself, so
> > we guess this might be related to some changes in RCU and socket buffers.
>
> It might also be related to your test method.

Of course it can be, but there are definitely differences from the
other kernels. Maybe there are some different settings in the kernel's
.config. I will try to re-run the test with a .config based on the one
used for the other kernels.

Michal

Re: Socketcan and LinCAN benchmarks

Michal Sojka
In reply to this post by Oliver Hartkopp
On Wednesday 02 of September 2009 19:05:44 Oliver Hartkopp wrote:

> Michal Sojka wrote:
> > On Wednesday 02 of September 2009 18:37:26 Oliver Hartkopp wrote:
> >
> > Interesting. We may try to do a similar test to check whether the problem
> > is in PEAK's FIFOs or whether it is a general property (Lincan is also
> > based on FIFOs filled from IRQ context).
>
> If you're planning some more tests, I would be interested in what happens
> if you run the rtt-virtual tests with waits of 0/1/2 ms.
>
> When there is a delay between the sent frames, we would probably see
> completely different behaviour on the virtual CANs.

I'm not sure about this. The delay is actually not the delay between
sends but the delay between receiving a response and sending the next
message. The titles of some graphs are not exact. Sorry.

Michal

Re: Socketcan and LinCAN benchmarks

Wolfgang Grandegger
In reply to this post by Michal Sojka
Michal Sojka wrote:

> On Wednesday 02 of September 2009 22:14:36 Wolfgang Grandegger wrote:
>> Hi Michal,
>>
>> Michal Sojka wrote:
>>> Dear socketcan users and developers,
>>>
>>> while working on a paper for the Eleventh Real-Time Linux Workshop, we
>>> have compared the timing properties of two Linux CAN drivers (Socketcan
>>> and Lincan). For those who are interested, the results can be seen at
>>> [http://rtime.felk.cvut.cz/can/benchmark/1/]. See also the comments on
>>> some test cases below.
>> Your results are indeed interesting. Could you please describe the
>> kernel config, especially the preemption type, of the kernels used in a
>> bit more detail? And also the load scenarios.
>
> Hi Wolfgang,
>
> I've just updated the results to include the kernel configs used. As
> regards preemption, it was like this:
> config-2.6.26-2-686:CONFIG_PREEMPT_NOTIFIERS=y
> config-2.6.26-2-686:CONFIG_PREEMPT_NONE=y
> config-2.6.29.4:CONFIG_PREEMPT_RCU=y
> config-2.6.29.4:CONFIG_PREEMPT=y
> config-2.6.29.4:CONFIG_DEBUG_PREEMPT=y
> config-2.6.29.4-rt16:CONFIG_PREEMPT_RCU=y
> config-2.6.29.4-rt16:CONFIG_PREEMPT_RT=y
> config-2.6.29.4-rt16:CONFIG_PREEMPT=y
> config-2.6.29.4-rt16:CONFIG_PREEMPT_SOFTIRQS=y
> config-2.6.29.4-rt16:CONFIG_PREEMPT_HARDIRQS=y
> config-2.6.29.4-rt16:CONFIG_DEBUG_PREEMPT=y
> config-2.6.31-rc7:CONFIG_PREEMPT_RCU=y
> config-2.6.31-rc7:CONFIG_PREEMPT=y
> config-2.6.31-rc7:CONFIG_DEBUG_PREEMPT=y

OK, CONFIG_PREEMPT was enabled. BTW, at what rate do you send the messages?

> The test scenarios can be seen from the testcase sources (available
> from the pages as well). In general, the Ethernet load was generated
> from another computer on the same network. Different testcases used
> different loads:
> ethflood: ping -f <computer with CAN card>
> ethflood64k: ping -f -s 64 <computer with CAN card>
> ethload: the computer with the CAN card ran: ssh <another host> 'find -L \
> /usr/src/linux -type f -exec cat "{}" ";"'
>
>>> The main goal of this work was to find out whether and how socketcan's
>>> integration with the Linux networking layer negatively influences
>>> communication latencies. The result, in brief, is that there is some
>>> impact, but socketcan is still suitable for most of our CAN-related
>>> projects.
>>>
>>> The testbed was a Pentium 4 box with a Kvaser quad-head PCI card
>>> (SJA1000 based). We simply connected output 1 to output 2 and measured
>>> how long it takes for a message to travel there and back. We used the
>>> canping [1] tool to generate the CAN traffic.
>> Could you be more precise about how the test was performed and which
>> interfaces are involved (can0 and can1?)? I do not understand what you
>> mean by "connected output 1 to output 2". Do you send and receive the
>> messages on the same system?
>
> Yes, messages are sent and received on the same system. Therefore the
> measured time comprises 2*(send overhead + message transmission +
> receive overhead) + canping reply generation overhead.

My criticism here is that send and receive are related in time
(synchronous). I think you would get different latency results if you
measured the round-trip time initiated by an external system: the test
target just receives the message and sends it back.
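
Such an echo target is trivial; a sketch (again reusing the
hypothetical open_can() helper from the round-trip example earlier in
the thread):

  int s = open_can("can0");
  struct can_frame f;

  /* reflect every received frame back to the initiating node */
  while (read(s, &f, sizeof(f)) == sizeof(f))
      write(s, &f, sizeof(f));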

>>> 1.4 Main result
>>> ----------------
>>>
>>> [http://rtime.felk.cvut.cz/can/benchmark/1/by-clck/2400/ethflood64k/2.6.29.4-rt16%3Amaxcpus%3D1//ethflood64k.png]
>>>
>>> This graph is the main result of our testing. If a system is loaded
>>> simultaneously by Ethernet and CAN traffic, socketcan's worst-case
>>> latency is increased and cannot be lowered by any tuning (see the red
>>> lines).
>> Did you create other concurrent load like running hackbench or cache
>> calibrator or "while ls; do ls /bin; done" in a telnet window?
>
> We didn't specifically load the CPU. Only the canpings and, in some
> tests, the ssh client were running on the system.

But then you cannot speak about worst-case latencies. I'm quite sure
that your latencies would increase under the load described above.

>>> 1.6 Kernel 2.6.31-rc7 has very good performance
>>> ------------------------------------------------
>>>
>>> The graphs at
>>> [http://rtime.felk.cvut.cz/can/benchmark/1/kern-2.6.31-rc7.html] look
>>> almost the same as the graphs for rt_preempt (-rt) kernels, and
>>> socketcan often performs as fast as Lincan. There seem to be no
>>> relevant changes in either the networking code or socketcan itself, so
>>> we guess this might be related to some changes in RCU and socket buffers.
>> It might also be related to your test method.
>
> Of course it can be, but there are definitely differences from the
> other kernels. Maybe there are some different settings in the kernel's
> .config. I will try to re-run the test with a .config based on the one
> used for the other kernels.

With "test method" I do not mean the kernel configuration but the way
you perform the test. See above.

Wolfgang.

Re: Socketcan and LinCAN benchmarks

Michal Sojka
On Thursday 03 of September 2009 14:01:49 Wolfgang Grandegger wrote:

> Michal Sojka wrote:
> >> Your results are indeed interesting. Could you please describe the
> >> kernel config, especially the preemption type, of the kernels used in a
> >> bit more detail? And also the load scenarios.
> >
> > Hi Wolfgang,
> >
> > I've just updated the results to include the kernel configs used. As
> > regards preemption, it was like this:
> > config-2.6.26-2-686:CONFIG_PREEMPT_NOTIFIERS=y
> > config-2.6.26-2-686:CONFIG_PREEMPT_NONE=y
> > config-2.6.29.4:CONFIG_PREEMPT_RCU=y
> > config-2.6.29.4:CONFIG_PREEMPT=y
> > config-2.6.29.4:CONFIG_DEBUG_PREEMPT=y
> > config-2.6.29.4-rt16:CONFIG_PREEMPT_RCU=y
> > config-2.6.29.4-rt16:CONFIG_PREEMPT_RT=y
> > config-2.6.29.4-rt16:CONFIG_PREEMPT=y
> > config-2.6.29.4-rt16:CONFIG_PREEMPT_SOFTIRQS=y
> > config-2.6.29.4-rt16:CONFIG_PREEMPT_HARDIRQS=y
> > config-2.6.29.4-rt16:CONFIG_DEBUG_PREEMPT=y
> > config-2.6.31-rc7:CONFIG_PREEMPT_RCU=y
> > config-2.6.31-rc7:CONFIG_PREEMPT=y
> > config-2.6.31-rc7:CONFIG_DEBUG_PREEMPT=y
>
> OK, CONFIG_PREEMPT was enabled. BTW, at what rate do you send the messages?

For tests without the -w option (most of them), this was as fast as
possible, i.e. whenever a response is received, the next ping is sent.
The test called rtt-w was done to check whether the delay between
reception and sending has some effect on the results.

> >>> The testbed was a Pentium 4 box with a Kvaser quad-head PCI card
> >>> (SJA1000 based). We simply connected output 1 to output 2 and measured
> >>> how long it takes for a message to travel there and back. We used the
> >>> canping [1] tool to generate the CAN traffic.
> >>
> >> Could you be more precise about how the test was performed and which
> >> interfaces are involved (can0 and can1?)? I do not understand what you
> >> mean by "connected output 1 to output 2". Do you send and receive the
> >> messages on the same system?
> >
> > Yes, messages are sent and received on the same system. Therefore the
> > measured time comprises 2*(send overhead + message transmission +
> > receive overhead) + canping reply generation overhead.
>
> My criticism here is that send and receive are related in time
> (synchronous). I think you would get different latency results if you
> measured the round-trip time initiated by an external system: the test
> target just receives the message and sends it back.

I agree. There is an overhead from context switching between the master
and slave canping processes. But the goal was to compare two different
drivers under the same conditions, which, I think, our tests did. Do
you know of another source of latencies in our tests?

>
> >>> 1.6 Kernel 2.6.31-rc7 has very good performance
> >>> ------------------------------------------------
> >>>
> >>> The graphs at
> >>> [http://rtime.felk.cvut.cz/can/benchmark/1/kern-2.6.31-rc7.html] look
> >>> almost the same as the graphs for rt_preempt (-rt) kernels, and
> >>> socketcan often performs as fast as Lincan. There seem to be no
> >>> relevant changes in either the networking code or socketcan itself, so
> >>> we guess this might be related to some changes in RCU and socket buffers.
> >>
> >> It might also be related to your test method.
> >
> > Of course it can be, but there are definitely differences from the
> > other kernels. Maybe there are some different settings in the kernel's
> > .config. I will try to re-run the test with a .config based on the one
> > used for the other kernels.
>
> With "test method" I do not mean the kernel configuration but the way
> you perform the test. See above.

I understood you well, but I realized that I had configured the .31-rc7
kernel differently from the previously tested kernels, so I wanted to
say that the difference in performance might also be caused by this.

It turned out to be true. I configured the kernel by "make oldconfig"
with the .config from 2.6.29.4, and the results are worse than before.
You can see it at
http://rtime.felk.cvut.cz/can/benchmark/1/kern-2.6.31-rc7-2.html and
compare it with
http://rtime.felk.cvut.cz/can/benchmark/1/kern-2.6.31-rc7.html.

Michal

Re: [Ocera-development] Socketcan and LinCAN benchmarks

Michal Sojka
In reply to this post by Michal Sojka
On Thursday 03 of September 2009 13:30:45 Michal Sojka wrote:

> On Wednesday 02 of September 2009 19:05:44 Oliver Hartkopp wrote:
> > If you're planning some more tests, I would be interested in what
> > happens if you run the rtt-virtual tests with waits of 0/1/2 ms.
> >
> > When there is a delay between the sent frames, we would probably see
> > completely different behaviour on the virtual CANs.
>
> I'm not sure about this. The delay is actually not the delay between
> sends but the delay between receiving a response and sending the next
> message. The titles of some graphs are not exact. Sorry.

I tried to run rtt-virtual with different delays on a differently
configured 2.6.31-rc7, and it seems there are some strange latencies in
almost all cases.

http://rtime.felk.cvut.cz/can/benchmark/1/by-clck/2400/rtt-virtual/2.6.31-rc7-2//rtt-virtual.png

I will try to find time to compare the relevant changes between
config-2.6.31-rc7-2 and config-2.6.31-rc7 which caused the differences
in this and also in the other tests.

Bye
Michal

Re: Socketcan and LinCAN benchmarks

Wolfgang Grandegger
In reply to this post by Michal Sojka
Hi Michal,

Michal Sojka wrote:

> On Thursday 03 of September 2009 14:01:49 Wolfgang Grandegger wrote:
>> Michal Sojka wrote:
>>>> Your results are indeed interesting. Could you please describe the
>>>> kernel config, especially the preemption type, of the kernels used in a
>>>> bit more detail? And also the load scenarios.
>>> Hi Wolfgang,
>>>
>>> I've just updated the results to include the kernel configs used. As
>>> regards preemption, it was like this:
>>> config-2.6.26-2-686:CONFIG_PREEMPT_NOTIFIERS=y
>>> config-2.6.26-2-686:CONFIG_PREEMPT_NONE=y
>>> config-2.6.29.4:CONFIG_PREEMPT_RCU=y
>>> config-2.6.29.4:CONFIG_PREEMPT=y
>>> config-2.6.29.4:CONFIG_DEBUG_PREEMPT=y
>>> config-2.6.29.4-rt16:CONFIG_PREEMPT_RCU=y
>>> config-2.6.29.4-rt16:CONFIG_PREEMPT_RT=y
>>> config-2.6.29.4-rt16:CONFIG_PREEMPT=y
>>> config-2.6.29.4-rt16:CONFIG_PREEMPT_SOFTIRQS=y
>>> config-2.6.29.4-rt16:CONFIG_PREEMPT_HARDIRQS=y
>>> config-2.6.29.4-rt16:CONFIG_DEBUG_PREEMPT=y
>>> config-2.6.31-rc7:CONFIG_PREEMPT_RCU=y
>>> config-2.6.31-rc7:CONFIG_PREEMPT=y
>>> config-2.6.31-rc7:CONFIG_DEBUG_PREEMPT=y
>> OK, CONFIG_PREEMPT was enabled. BTW, at what rate do you send the messages?
>
> For tests without the -w option (most of them), this was as fast as
> possible, i.e. whenever a response is received, the next ping is sent.
> The test called rtt-w was done to check whether the delay between
> reception and sending has some effect on the results.

OK.

>
>>>>> The testbed was a Pentium 4 box with a Kvaser quad-head PCI card
>>>>> (SJA1000 based). We simply connected output 1 to output 2 and measured
>>>>> how long it takes for a message to travel there and back. We used the
>>>>> canping [1] tool to generate the CAN traffic.
>>>> Could you be more precise about how the test was performed and which
>>>> interfaces are involved (can0 and can1?)? I do not understand what you
>>>> mean by "connected output 1 to output 2". Do you send and receive the
>>>> messages on the same system?
>>> Yes, messages are sent and received on the same system. Therefore the
>>> measured time comprises 2*(send overhead + message transmission +
>>> receive overhead) + canping reply generation overhead.
>> My criticism here is that send and receive are related in time
>> (synchronous). I think you would get different latency results if you
>> measured the round-trip time initiated by an external system: the test
>> target just receives the message and sends it back.
>
> I agree. There is an overhead from context switching between the master
> and slave canping processes. But the goal was to compare two different
> drivers under the same conditions, which, I think, our tests did. Do
> you know of another source of latencies in our tests?

OK, it's fine if you just want to compare the overhead of the two
drivers. But the worst-case latency (the jitter of the RTT) is an
interesting figure as well, and you need more load and maybe also more
time to measure it. Then a difference between -rt and vanilla kernels
should also show up. Furthermore, I expect that the char driver has the
better real-time behavior, in case it does not use dynamic memory
allocation.

Wolfgang.



Re: Socketcan and LinCAN benchmarks

Oliver Hartkopp
Wolfgang Grandegger wrote:

> Furthermore, I expect that the char driver has the better real-time
> behavior, in case it does not use dynamic memory allocation.

I also expected that, but I was really astonished that it was the other
way round when I did a receive benchmark with a PEAK PCMCIA card with
and without netdev support:

https://lists.berlios.de/pipermail/socketcan-users/2009-September/001036.html

Regards,
Oliver


Re: Socketcan and LinCAN benchmarks

kedar
In reply to this post by Oliver Hartkopp

Hi,

I am new to CAN and I wrote a socketcan driver for my hardware, but I
still don't clearly understand the flow from user space to kernel space
and from there to the hardware. Could you please explain the flow?
Waiting for your response. My mail id is: [hidden email]

Thanks,
Kedar.