OMRON Forums

shansen

  1. J0hann - this drove me nuts too. When my company updated firmware, I made some changes to the header files to prevent the warnings. I didn't do the greatest job of documenting the changes, but I did write down the following: in libopener/typedefs.h, move the "#include" line inside the !defined(__KERNEL__) block. Here is a snippet from typedefs.h showing the change (the angle-bracketed header names were lost from the post, so they appear below as <...>):

     #ifndef OPENER_TYPEDEFS_H_
     #define OPENER_TYPEDEFS_H_

     #if !defined(__KERNEL__)
     #include <...>
     #include <...>
     #else
     #include <...>
     #include <...>
     #endif

     #include <...>

     /** @file typedefs.h

     I am not using the IDE, but I think the IDE auto-downloads all header files from the Power PMAC, so you may have to make the change both on the Power PMAC and on your development computer.
  2. If you place the file in the FTP top-level directory, it should be located in /var/ftp. So pass the full file path to fopen:

     fp = fopen("/var/ftp/test.txt", "r");
  3. Alex: This is definitely possible; a while back we wrote an Ethernet server that ran with <500us (+/- jitter) between packets. It was written as a background C application. We used an architecture based on select(): each socket was added to the fd_set monitored by select(), and once select() reported data on a socket we used recvfrom() to read it and return a response (see the sketch below). Depending on what you are doing, some optimization may be required to ensure that parsing the packet doesn't take too long.
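     Here is a minimal sketch of that kind of select()-based UDP server. The port number, buffer size, and echo-style reply are illustrative assumptions, not details from the original application:

     /* Minimal sketch of a select()-based UDP server as described above. */
     #include <stdio.h>
     #include <sys/types.h>
     #include <sys/select.h>
     #include <sys/socket.h>
     #include <arpa/inet.h>

     int main(void)
     {
         int sock = socket(AF_INET, SOCK_DGRAM, 0);
         struct sockaddr_in addr = { 0 };

         addr.sin_family = AF_INET;
         addr.sin_addr.s_addr = htonl(INADDR_ANY);
         addr.sin_port = htons(5000);               /* hypothetical port */
         if (sock < 0 || bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
             perror("socket/bind");
             return 1;
         }

         for (;;) {
             fd_set readfds;
             FD_ZERO(&readfds);
             FD_SET(sock, &readfds);

             /* Block until a datagram is ready; additional sockets can be
              * added to the fd_set to multiplex several connections. */
             if (select(sock + 1, &readfds, NULL, NULL, NULL) <= 0)
                 continue;

             if (FD_ISSET(sock, &readfds)) {
                 char buf[1500];
                 struct sockaddr_in peer;
                 socklen_t len = sizeof(peer);
                 ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                                      (struct sockaddr *)&peer, &len);
                 if (n > 0)  /* parse the request here, then send the reply */
                     sendto(sock, buf, n, 0, (struct sockaddr *)&peer, len);
             }
         }
     }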
  4. I'd bet this is because your C app links against the Xenomai scheduler (by default, because libppmac links against it). When the breakpoint is hit, the scheduler hangs and you get a hard watchdog. Before starting gdb, go into gpascii and issue "Sys.DebugMode=$AAAAAAAA". This lets the RTI thread continue to toggle the watchdog so the Xenomai scheduler doesn't hang when you hit a breakpoint.
  5. I'm not sure how to do this via the IDE. I don't think there is an easy way, but it wouldn't be too bad to compile your shared library for each Power PMAC platform (there are only a handful). If you are comfortable with a custom makefile, you could write one that automatically compiles for all targets each time you build. Then you can provide the compiled files to your customers along with an API, and they can link against these libraries when they compile. Although I am not sure if that is what you mean by "unreadable": a lot of implementation detail can still be dumped from a compiled binary, but it will be in assembly (unless they're using a fancy reverse-engineering tool).
  6. Off the top of my head I am not sure. I usually avoid mutexes between RT and non-RT code because they can cause jitter on the RT side. My guess is that the rt_mutex_* calls should be used for both your RT and non-RT tasks, but this will switch your non-RT thread to primary mode each time you call acquire() or release(). I don't know of another way to prevent mode switches other than using a shared memory approach. Typically mode switches aren't a big deal for non-RT threads, because those threads aren't deterministic regardless; it is the RT thread that you want to keep free of mode switches. Running your RT code in kernel mode is only required if you really need low jitter: in rough numbers, I've found that running RT code in kernel mode results in ~10us of jitter compared to ~200us of jitter in user mode (benchmarked on the 460EX). But you should be able to allocate shared memory in either task and share it with the other task without issue if you are using the Xenomai shm API (rt_heap_create, etc.); see the sketch below.
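     As a rough illustration, here is a minimal sketch of that shared-memory approach, assuming the Xenomai 2 native heap API. The heap name and size are illustrative, and error handling is reduced to returning NULL:

     /* Sharing one block of memory between two tasks via a named,
      * mappable Xenomai heap, as mentioned above. */
     #include <string.h>
     #include <native/heap.h>
     #include <native/timer.h>

     #define SHM_NAME "app_shm"   /* hypothetical name */
     #define SHM_SIZE 4096

     /* Creating side: make a mappable, single-block heap and grab the block. */
     void *shm_create(RT_HEAP *heap)
     {
         void *block = NULL;

         if (rt_heap_create(heap, SHM_NAME, SHM_SIZE, H_SHARED) < 0)
             return NULL;
         /* In single-block (H_SINGLE) mode the size argument is ignored
          * and the whole heap is returned as one block. */
         if (rt_heap_alloc(heap, 0, TM_NONBLOCK, &block) < 0)
             return NULL;
         memset(block, 0, SHM_SIZE);
         return block;
     }

     /* Other side: bind to the heap by name and map the same block. */
     void *shm_attach(RT_HEAP *heap)
     {
         void *block = NULL;

         if (rt_heap_bind(heap, SHM_NAME, TM_INFINITE) < 0)
             return NULL;
         if (rt_heap_alloc(heap, 0, TM_NONBLOCK, &block) < 0)
             return NULL;
         return block;
     }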
  7. I would be more responsive but have been buried working on other projects! Calling rt_task_create does create a thread scheduled by the Xenomai scheduler, and that thread will therefore incur mode switches if you use syscalls within its context. Based on my understanding of what you are trying to achieve, you should have one thread created with rt_task_create that runs your deterministic code, and one thread created with pthread_create that acts as a background thread for handling communications (see the sketch below).

     Are you using the Delta Tau IDE? I haven't used the IDE in years, so bear with me, but one gotcha is that when the IDE generates Makefiles it automatically wraps pthread_create with the Xenomai equivalent (or at least it used to). This means that if you call pthread_create, the linker will actually link to Xenomai's pthread_create_rt() instead of pthread_create(), making the thread scheduled by Xenomai instead of the Linux scheduler. This could be the source of your problems. Check the Makefile for your communications program and verify that there isn't a "--wrap,pthread_create" option passed to the linker. The realtime Xenomai posix functions are located in libpthread_rt, so your Makefile will probably have a "-lpthread_rt" argument passed to the linker as well.

     Even with wrapping, this can still be made to work. What you will see is that the first time your background (communications) thread calls a send() function, the thread will switch to secondary mode. But it shouldn't switch back to primary on its own unless you call a Xenomai realtime function, so it should stay in secondary mode for the rest of its lifetime. What makes it tricky is that the IDE might wrap other functions too, so check the Makefile to figure out whether you are calling any other functions wrapped by Xenomai equivalents and make sure to avoid them (or remove the wrapping altogether if you are comfortable using a custom Makefile). Wraps in the Makefile look like this:

     WRAP := -Wl,--wrap,clock_getres \
             -Wl,--wrap,clock_gettime \
             -Wl,--wrap,clock_settime \
             -Wl,--wrap,clock_nanosleep \
             -Wl,--wrap,nanosleep
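     For concreteness, here is a minimal sketch of that two-thread split, assuming the Xenomai 2 native task API and an unwrapped pthread_create. The task name, stack size, and priority are illustrative:

     /* One Xenomai-scheduled deterministic task plus one plain Linux
      * pthread for communications, as described above. */
     #include <pthread.h>
     #include <native/task.h>

     static RT_TASK   rt_task;
     static pthread_t comm_thread;

     static void rt_loop(void *cookie)
     {
         /* Deterministic work only: no syscalls here, or the thread
          * will switch to secondary mode. */
     }

     static void *comm_loop(void *arg)
     {
         /* Sockets, file I/O, logging, etc. live here. */
         return NULL;
     }

     int start_threads(void)
     {
         /* Scheduled by Xenomai (priority 80 is an arbitrary example). */
         if (rt_task_create(&rt_task, "rt_loop", 0, 80, T_JOINABLE) < 0)
             return -1;
         if (rt_task_start(&rt_task, rt_loop, NULL) < 0)
             return -1;

         /* Scheduled by Linux -- but only if the Makefile does NOT wrap
          * pthread_create with the Xenomai equivalent, as noted above. */
         return pthread_create(&comm_thread, NULL, comm_loop, NULL);
     }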
  8. Interesting, I haven't seen that problem before. But I also typically avoid using memcpy within realtime threads due to performance issues (it can cause excessive context switches due to cache misses). OK, I just set up a quick test and added a memcpy of 10k bytes to and from DT shared memory while in kernel space, and it doesn't seem to result in mode switches. Is it possible that you are allocating your shared memory in user space? I believe when you are sharing memory between kernel and user space you need to allocate in kernel space first using kmalloc. Here is one of my routines that allocates shm for this type of application.

     Kernel space code:

     #define MODULE_NAME "some_module"   // will be created as /dev/some_module

     static struct file_operations fops = {
         .owner = THIS_MODULE,
         .mmap = vdb_mmap,
         .unlocked_ioctl = NULL
     };

     int vdb_mmap( struct file *filp, struct vm_area_struct *vma )
     {
         int err = 0;

         vma->vm_ops = &mmap_ops;
         vma->vm_flags |= VM_RESERVED;
         vma->vm_flags |= VM_SHARED;
         vma->vm_flags |= VM_LOCKED;
         vma->vm_private_data = filp->private_data;

         err = remap_pfn_range(vma, vma->vm_start,
                               virt_to_phys(vdb_memory) >> PAGE_SHIFT,
                               VDB_SHARED_MEMORY_SIZE, vma->vm_page_prot);
         if( err < 0 )
             return -EAGAIN;

         return 0;
     }

     static int init_and_allocate_memory( void )
     {
         if( alloc_chrdev_region(&version, 0, 1, MODULE_NAME) < 0 )
         {
             printk(KERN_ERR "%s ERROR: Invalid device version.\n", MODULE_NAME);
             return -EAGAIN;
         }

         driver = cdev_alloc();
         driver->owner = THIS_MODULE;
         driver->ops = &fops;
         if( cdev_add(driver, version, 1) < 0 )
         {
             printk(KERN_ERR "%s ERROR: Could not create a device.\n", MODULE_NAME);
             return -EAGAIN;
         }

         if( sizeof(vdb_t) >= VDB_SHARED_MEMORY_SIZE )
         {
             printk(KERN_ERR "%s.ko ERROR: VDB size of %u bytes is greater than shared memory size of %lu bytes.\n",
                    MODULE_NAME, sizeof(vdb_t), VDB_SHARED_MEMORY_SIZE);
             return -ENOMEM;
         }

         vdb_memory = kmalloc(VDB_SHARED_MEMORY_SIZE, GFP_KERNEL);
         if( !vdb_memory )
         {
             printk(KERN_ERR "%s.ko ERROR: Could not allocate VDB memory.\n", MODULE_NAME);
             return -ENOMEM;
         }
         memset((void *) vdb_memory, 0, VDB_SHARED_MEMORY_SIZE);

         return 0;
     }

     User space code:

     int32 vdb_open_database( void )
     {
         int err = 0;

         fd = open("/dev/some_module", O_RDWR);
         if( fd < 0 )
         {
             perror("Could not open /dev/some_module: ");
             return -1;
         }

         sharedmem = mmap(0, VDB_SHARED_MEMORY_SIZE, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
         if( sharedmem == MAP_FAILED || !sharedmem )
         {
             close(fd);
             perror("Could not mmap vdb memory via APC firmware: ");
             return -1;
         }

         if( !is_aligned(sharedmem) )
         {
             munmap(sharedmem, VDB_SHARED_MEMORY_SIZE);
             close(fd);
             printf("libvdb ERROR: mmap'd shared memory was not 4 byte aligned.\n");
             return -1;
         }

         return err;
     }

     I did some hack-y copying and pasting there, so hopefully I got everything you need. Essentially the steps are (assuming you are using a char driver in kernel space for your implementation):

     1) Create a cdev in kernel space
     2) Allocate memory with kmalloc in kernel space
     3) Set up the kernel module for mmap access
     4) In user mode, use mmap with /dev/your_char_device to get mmap'd access
  9. rvanderbijl: Each thread is run separately and is scheduled either by the real-time kernel (Xenomai) or by Linux (background, non-RT). So scheduling is assigned on a per-thread basis, and a single thread cannot be both realtime and non-realtime. I have some limited experience with the dual core CPU, but probably not enough to help with your CPU affinity issue. I know that DT has configured the OS to run in SMP mode, so Xenomai schedules threads to run on both CPUs. The end result is that Xenomai should automatically select the best core for each thread, and you shouldn't have to manually assign affinities.

     rt_mutex_acquire is part of the native Xenomai API and definitely should not result in a mode switch by itself. The best way to debug is to use rt_task_inquire (or the posix equivalent if you are using the posix task API) to figure out where your program is switching back to secondary mode. Note that if your task is running in secondary mode (non-realtime), then a call to rt_mutex_acquire will switch it back to primary mode, causing a mode switch. So it is most likely that something else in the code is causing a switch to secondary mode, and the task then switches back to primary when rt_mutex_acquire is called.

     EDIT: oops, didn't read your last reply carefully enough. Yes, the socket writes are definitely switching your thread to secondary mode, and then rt_mutex_acquire is switching you back to primary. The solution is to move all socket writes to a background thread. We typically allocate our own shared memory between socket threads and RT threads and use some simple request/acknowledge flags to synchronize data transfer via the shared memory (see the sketch below).
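     Here is a minimal sketch of that kind of request/acknowledge handshake over shared memory. The struct layout and counter scheme are illustrative, and a production version would need memory barriers appropriate to the platform:

     /* Illustrative request/acknowledge mailbox in shared memory between
      * a non-RT socket thread (producer) and an RT thread (consumer). */
     #include <stdint.h>

     typedef struct {
         volatile uint32_t request;    /* incremented by the socket thread  */
         volatile uint32_t ack;        /* set equal to request by RT thread */
         uint32_t          length;
         uint8_t           payload[256];
     } shm_mailbox_t;

     /* Socket (non-RT) thread: publish only after the last message was consumed. */
     static int publish(shm_mailbox_t *box, const uint8_t *data, uint32_t len)
     {
         if (box->request != box->ack || len > sizeof(box->payload))
             return 0;                      /* RT side not ready, try later */
         for (uint32_t i = 0; i < len; i++)
             box->payload[i] = data[i];
         box->length = len;
         box->request++;                    /* hand the message to the RT side */
         return 1;
     }

     /* RT thread: poll once per cycle; touching shared memory makes no
      * syscall, so there is no mode switch. */
     static int consume(shm_mailbox_t *box, uint8_t *out, uint32_t max)
     {
         if (box->request == box->ack)
             return 0;                      /* nothing new */
         uint32_t len = box->length <= max ? box->length : max;
         for (uint32_t i = 0; i < len; i++)
             out[i] = box->payload[i];
         box->ack = box->request;           /* acknowledge consumption */
         return (int)len;
     }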
  10. rvanderbijl: Great question. This is something my company bumped up against many years ago when we started using Power PMAC. The short answer is: yes, it is definitely possible to avoid mode switches with this type of architecture. When you say "shuttle data from one side of the fence -- Xenomai -- to Linux (and back)", this is exactly what shared memory is for. Accessing shared memory does not incur a mode switch (it will incur context switches, but not mode switches, labeled MSW in /proc/xenomai/stat). But you need to ensure that the thread running in Xenomai's primary mode is not the thread doing the TCP/IP communications, because each call into the OS network stack switches the current thread from primary mode to secondary mode. Therefore you need to separate your real-time code from your TCP/IP code. You can communicate between these two tasks using 1) DT shared memory, 2) shared memory you allocate yourself, or 3) a message queue.

     Here is a function I use to switch my realtime threads to primary mode. For the Xenomai native task library:

     #include <native/task.h>

     int task_switch_to_primary( void )
     {
         // returns <0 if thread did not switch to primary mode
     #ifdef T_CONFORMING
         return rt_task_set_mode(0, T_CONFORMING, NULL);
     #else
         return rt_task_set_mode(0, T_PRIMARY, NULL);
     #endif
     }

     For the Xenomai posix task library:

     #include <pthread.h>

     int task_switch_to_primary( void )
     {
         // returns <0 if thread did not switch to primary mode
         return pthread_set_mode_np(0, PTHREAD_PRIMARY);
     }

     Also make sure that you are calling mlockall() while initializing your realtime task:

     err = mlockall(MCL_CURRENT | MCL_FUTURE);

     One of our original concerns was that making our TCP/IP loop a background (non-realtime) thread would incur serious performance penalties. So far this hasn't been a problem: using a polling architecture with select() we are able to get 250+ microsecond updates between Ethernet packets (depending on payload size), albeit with marginal jitter depending on CPU load. 1ms updates should be no problem at all as long as the target on the other end can handle it (we did testing a while back and found that for Windows-to-Power-PMAC we could do ~1-2ms updates with some jitter depending on the Windows machine's CPU usage, but for Power-PMAC-to-other-realtime-CPU we could do ~250us updates reliably).

     I would also be more concerned about the unresponsive Power PMAC. I have never had mode switches cause a Power PMAC to become unresponsive (even with code that switches modes many times each second), but I have seen this problem when I didn't allocate memory properly or when I tried to do "funny" things in kernel mode (typically as part of user servo or user phase code). You may already know this, but if you plug a serial cable into the Power PMAC it will dump some information when it crashes, and this can help you troubleshoot where the problem is occurring.
  11. usamasiraj: My company has done a lot of work with LabVIEW and Power PMAC. However, we typically write our own communication drivers using raw TCP/UDP instead of using Delta Tau's gpascii over TCP. If you understand LabVIEW's TCP primitives then writing your own driver to talk to gpascii would be fairly simple.
  12. We have also seen the same issue at a customer's site. In the end it was tracked down to the customer's server. Their IT department was using a server tool that would randomly scan for ports it doesn't recognize and close them to prevent unauthorized network traffic. We could also use ifdown+ifup to temporarily recover. I never got a technical answer from the IT guys about what they did to fix the issue.
  13. I would check the CFLAGS in the makefile for the 465 build. Are they passing the '-g' flag? Also check that optimizations are turned on (I think Delta Tau typically uses '-O2').
  14. daves: If you are ambitious and really want to get to the bottom of this, upload both files to the Power PMAC and dump their executables to assembly (objdump -S [FILE] > source.lst). Then compare the two and see what the differences are.