OMRON Forums

A Tale of Xenomai mode switches


smr99

Our system consists of several C-language servos, a C-language RTI plc, and one multithreaded “Background” C program that functions as a bridge to our HMI running on a different machine. This bridge program does logging to disk and communicates over a socket to our HMI to receive commands and send status. The bridge communicates with the RTI code using pushm shared memory and two FIFOs. The two FIFOs are used for the RTI to send data streams that are logged to disk: they are written in the RTI (non-blocking, discarding a write if the FIFO is full) and read in the bridge. The two FIFOs contain information to be logged for troubleshooting; one contains motor and servo values at each RTI cycle, the other contains other asynchronous "events" such as detected faults.

 

The IDE-generated makefile wraps many of the POSIX calls to call Xenomai-provided functions. Since pthread_create() is wrapped, all the threads in the bridge end up as Xenomai threads. To mediate access to shared data, semaphores and mutexes are used; these are also wrapped, so we end up using Xenomai ("primary mode") calls for them. On the other hand, the I/O is done on standard Linux file descriptors (socket, disk files, and the FIFOs) and handled in Linux ("secondary mode"). As a consequence, I can see each of these threads making 100-200 mode switches per second, correlated with the job each is doing. For example, one thread reads a FIFO that is written at 250 Hz by the RTI and makes 250 mode switches per second.

 

Occasionally, the bridge program will lock up. This was ultimately traced to these mode switches: the process was stuck in a call to xnshadow_harden(). The current Delta Tau build uses Xenomai 2.5.6, and indications are that this version has a number of mode-switching bugs: http://www.xenomai.org/pipermail/xenomai/2014-June/031120.html Thus it seems prudent to reduce the mode switches. For the background program, we can arrange to avoid the wrapping; this indeed avoids creating Xenomai threads and eliminates all mode switching.

 

However, we are still faced with mode switches in the RTI itself. The RTI is by necessity a xenomai thread and, due to calling write() on the FIFO, suffers a mode switch each time it runs. So we are now considering how to replace the use of FIFO with something that incurs no mode switching.

 

In short: we want to write in the xenomai thread and read in a linux thread with no mode switching on either end. Is there an out-of-the-box solution I've overlooked? Since we have a shared memory I can imagine using a lock-free queue, though I haven't yet looked around for an implementation. Ideas? I'd like to hear how others have solved a similar problem of getting log data from the real time to a background process.

 

Incidentally, the system sometimes trips the hard watchdog. My hypothesis at this point is that the same bug that bites us when mode-switching the background process may also bite during an RTI mode switch, causing the RTI to hang, fail to update the hardware watchdog counter, and boom! Anyone have thoughts on this?



For what it's worth, my company is currently doing something very similar (e.g. logging data in a Xenomai thread and then using a background thread to process the data). We ended up mmap'ing a block of memory and sharing it between the two threads, and using this method we do not incur any mode switches except for the first mmap() on powerup.

 

If you want a more out-of-the-box solution (a la Delta Tau), you could use pushm memory as a shared buffer. However, you do still have to come up with a scheme for how data is written to memory and then interpreted on the other end, but that could be simple depending on your implementation.


smr99: I just saw your post on the Xenomai mailing list.

 

Have you checked out RTDM IPC like Philippe suggested? It looks like it fits your requirements pretty well. Although the 2.5.3 API docs don't list any IPC functions for some reason, it looks like they are all there. If you installed the DT IDE, the source code can be found at:

 

C:\Program Files (x86)\Delta Tau Data Systems Inc\2.0\Power PMAC Suite\powerpc-460-linux-gnu\opt\eldk-4.2\debian_rootfs\usr\src\xenomai-2.5.6\ksrc\drivers\ipc

 

Also, here is the 2.6 API for IPC (which includes all the relevant functions):

 

http://www.xenomai.org/documentation/trunk/html/api/group__rtipc.html
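For reference, usage of the XDDP protocol from that rtipc driver looks roughly like the sketch below. This is a hedged illustration based on the 2.6 docs linked above, not something I have compiled against the 2.5.6 tree; it requires a Xenomai installation providing <rtdm/rtipc.h>, and the port number (7) is an arbitrary example.

```c
/* Hedged sketch of XDDP (cross-domain datagram protocol) from the
 * Xenomai 2.6 rtipc driver. Real-time side writes to an AF_RTIPC
 * socket in primary mode; the Linux side reads the paired /dev/rtpN
 * pseudo-device. Port 7 is an arbitrary example value. */
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/socket.h>
#include <rtdm/rtipc.h>

/* Real-time side: bind a datagram socket to an XDDP port and write. */
static int rt_side_send(const void *msg, size_t len)
{
    int s = socket(AF_RTIPC, SOCK_DGRAM, IPCPROTO_XDDP);
    struct sockaddr_ipc saddr;

    memset(&saddr, 0, sizeof(saddr));
    saddr.sipc_family = AF_RTIPC;
    saddr.sipc_port = 7;                 /* example port, pairs with /dev/rtp7 */
    if (bind(s, (struct sockaddr *)&saddr, sizeof(saddr)) < 0)
        return -1;
    return write(s, msg, len);           /* should stay in primary mode */
}

/* Linux side: each bound XDDP port shows up as /dev/rtpN. */
static int linux_side_recv(void *buf, size_t len)
{
    int fd = open("/dev/rtp7", O_RDONLY); /* matches sipc_port above */
    int n = (fd >= 0) ? (int)read(fd, buf, len) : -1;
    if (fd >= 0) close(fd);
    return n;
}
```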


OK scratch that last, the IPC functions might NOT be there in 2.5.6.

 

I'm going to try to compile the IPC functionality from 2.6.3 into a module that will work under 2.5.6. Not sure if that will be too much work or not, but I'd also like to have some form of real-time IPC for Power PMAC projects.


  • 10 months later...

I know this is an old thread, but has there been any progress on this?

 

I am having some intermittent non-responsiveness in an application I am testing. It would lock up after an hour or two. I've traced it down to a very simple dummy function, called from a thread, that basically just slept for 300 ms and then wrote a debug message out to a socket (which generates a switch to Xenomai secondary mode...). I used the Delta Tau-supplied "send" function to trace where it hung, and it pointed toward the above-mentioned function never returning. I started suspecting the socket write immediately after that.

 

I commented out the socket write and the application ran continuously for about 24 hours. Unfortunately I put the write to the socket back in the program and recompiled and now it doesn't seem to be hanging up after several hours! I'll let it run through the night. I'm afraid that some other slight code change (additional debug code, etc) may be allowing the thread not to hang. I've seen this sort of thing before.

 

I have always wanted to delegate the Ethernet messages out to a non-realtime task and use an RTOS FIFO (like back in the days of RTAI and RTLinux FIFOs...), but I have had to put this on the back burner because... well... mode switching was easier when I first wrote the code and it just seemed to work. Now I am growing wary that these mode switches in my code could result in very intermittent instability such as I am seeing now.

 

Have you had any luck creating realtime FIFOs to do clean streaming between primary and secondary threads? I feel like I'm going to be digging into this soon and any groundwork would help.

 

Thanks,

KEJR


  • 3 months later...

I also would like to use realtime FIFOs in the next months to manage errors and to do some logging. Realtime, because I want it to run in the Delta Tau RTI CPLC as well.

Have you guys found a solution to your "mode switches" issues? Which libraries/functions do you use?

 

I currently use shared-variable (P and Q variable) bitmasking to transmit application errors. It is cumbersome to manage.

I would like to improve my error management, and I think using a queue would be much more flexible.

It would be very nice of you to share your methods and habits for transmitting and logging application errors.

 

Thank you!

Anthony


Anthony,

 

I haven't done any work with message queues yet. You could try to call one of the functions and see if the IDE environment will be able to access and compile it.

 

I'm still switching to secondary mode when we get an error and accessing the socket directly. The model works in my situation because when an error is sent to our HMI it requires a user response which by nature is not realtime :o) Once our error is acknowledged the thread switches back into primary mode and goes realtime.

 

If you are sending out data in the RTI function, I believe primary/secondary isn't an option because it is programmed in kernel space. Have you tried just using the send() function and running the getsends program at the shell with its output redirected to a file? This might be a good experiment, and if it works you can write a C program to spawn a getsends thread and have its stdout go to a file or over a socket programmatically, etc. I'd still prefer a FIFO, but I am pretty sure send() will work if the other attempts fail.


  • 1 month later...

It seems I can compile C++ code, link against libppmac and libxenomai, and eliminate all the __wrap_xxxx directives from the Makefile without problems. It's libppmac.so that has undefined references to __wrap_xxxx functions, requiring libxenomai (and friends), so when not wrapping at the linker level, all other source files/shared libs use the standard Linux/POSIX calls.

 

When I do this, it appears that only the thread calling InitLibrary() is listed in /proc/xenomai/stat and sched as I would hope. Other threads are normal Linux threads and the xenomai mode switch counters stay at 1 each while I read/write shared memory and call normal Linux (non-wrapped) functions.

 

As I get this functionality built into a larger test application I may yet encounter problems with this approach -- I'll update here with any interesting results. I do expect that if I need to change the Xenomai thread priority, or make Xenomai FIFOs, etc., I'll need to call the __wrap functions explicitly in my code. (These functions can be listed from the shared objects in the /usr/local/xenomai/lib folder; I use 'nm *.so | grep wrap'.)

 

- Luke


This topic is now closed to further replies.